Yudkowsky apparently defines the term “FOOM” here:
“FOOM” means way the hell smarter than anything else around, capable of delivering in short time periods technological advancements that would take humans decades, probably including full-scale molecular nanotechnology [...]
It’s weird and doesn’t seem to make much sense to me. How can the term “FOOM” be used to refer to a level of capability?
We should probably scratch that definition—even though it is about the only one provided.
If the term “FOOM” has to be used, it should probably refer to actual rapid progress, not merely to a capability of producing technologies rapidly.
I suppose it makes sense if we assume he was actually describing a product of FOOM rather than the process itself.
Creating molecular nanotechnology may be given as homework in the 29th century, but that's quite a different idea from there being rapid technological progress between now and then. You can attain large capabilities through slow, gradual progress as well as via a sudden rapid burst.
Yeah, it's a terrible definition. I think the AI-FOOM debate provides a reasonable grounding for the term "FOOM", though I agree that it's important to have a concise definition at hand.
In the post, I used FOOM to mean an optimization process optimizing itself in an open-ended way.[1] I assumed that this corresponded to other people’s understanding of FOOM, but I’m happy to be corrected.
I would use the term “singularity” to refer more generally to periods of rapid progress, so e.g. I’d be comfortable saying that FOOM is one kind of process that could lead to a singularity, though not exclusively so. Does this match with the common understanding of these terms?
[1] Perhaps that last “open-ended” clause just re-captures all the mystery, but it seems necessary to exclude examples like a compiler making itself faster but then making no further improvements.
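The footnote's distinction can be made concrete with a toy numerical model (purely illustrative; the growth rates and function names are my own stand-ins, not anything from the thread). The compiler-like process makes one bounded self-improvement and then hits a fixed point; the open-ended process feeds each gain back into further gains.

```python
def compiler_like(speed, rounds):
    """One self-improvement, then a fixed point: the compiler
    recompiles itself once and gains nothing further."""
    history = [speed]
    for _ in range(rounds):
        speed = max(speed, 2.0)  # a single bounded gain, then no change
        history.append(speed)
    return history

def open_ended(speed, rounds):
    """Each round's gain scales with current capability, so
    improvement feeds back into further improvement."""
    history = [speed]
    for _ in range(rounds):
        speed = speed * 1.5  # gain proportional to current level
        history.append(speed)
    return history

print(compiler_like(1.0, 5))  # plateaus at 2.0
print(open_ended(1.0, 5))     # keeps compounding
```

On this reading, "open-ended" is just the claim that the improvement curve looks like the second function rather than the first.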
My understanding of the FOOM process:

1. An AI is developed to optimise some utility function or solve a particular problem.
2. It decides that the best way to go about this is to build another, better AI to solve the problem for it.
3. The nature of the problem is such that the best course of action for an agent of any conceivable level of intelligence is to first build a more intelligent AI.
4. The process continues until we reach an AI of an inconceivable level of intelligence.
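The loop above can be sketched in a few lines. This is a hypothetical toy model, not anyone's actual proposal: the "intelligence" numbers, doubling rate, and the threshold standing in for "inconceivable" are all made-up parameters for illustration.

```python
def solve_or_delegate(intelligence, threshold=100.0):
    """Steps 2-4 of the loop: while building a smarter successor
    is the best move (step 3), build one (step 2); once the
    threshold is crossed, the delegation chain stops (step 4)."""
    generations = 0
    while intelligence < threshold:  # step 3: delegating still dominates
        intelligence *= 2            # step 2: build a better AI
        generations += 1
    return intelligence, generations

final, gens = solve_or_delegate(1.0)
print(final, gens)  # 128.0 after 7 generations
```

The interesting (and contested) premise is the loop condition: that for every agent below the threshold, building a successor really is the best move.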