I really enjoyed reading this. Quite concise, well organised and I thought quite comprehensive (nothing is ever exhaustive so no need to apologise on that front). I will find this a very useful resource and while nothing in it was completely “new” to me I found the structure really helped me to think more clearly about this. So thanks.
A suggestion: it might be useful to turn your attention to specific process steps using the attention-directing classification tools outlined here. For example:
Step 1: Identify the type of risk (transparent, opaque, Knightian)
Step 2: List mitigation strategies for the risk type; consider pros/cons for each strategy
Step 3: Weight strategy effectiveness according to the pros/cons and your ability to undertake each strategy
etc—that’s just off the cuff—I’m sure you can do better :)
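Off the cuff too, the steps above could be sketched as a toy script. Everything in it (the risk types aside, which are from the post, the strategy names, scores, and the weighting rule) is an invented placeholder, just to show the shape of the process:

```python
from dataclasses import dataclass

RISK_TYPES = {"transparent", "opaque", "Knightian"}

@dataclass
class Strategy:
    name: str
    pros: list[str]
    cons: list[str]
    effectiveness: float  # subjective 0..1 score, informed by the pros/cons
    feasibility: float    # your ability to undertake it, 0..1

def rank_strategies(risk_type: str, strategies: list[Strategy]) -> list[Strategy]:
    """Step 3: weight each strategy by effectiveness x feasibility, best first."""
    if risk_type not in RISK_TYPES:
        raise ValueError(f"unknown risk type: {risk_type}")
    return sorted(strategies, key=lambda s: s.effectiveness * s.feasibility,
                  reverse=True)

# Step 2: list candidate mitigation strategies (placeholder values)
candidates = [
    Strategy("buy insurance", ["caps downside"], ["costs premium"], 0.8, 0.9),
    Strategy("diversify", ["robust to surprises"], ["dilutes upside"], 0.6, 0.7),
]

best = rank_strategies("transparent", candidates)[0]
```

The point is only that once the risk type is identified (Step 1), the rest of the process is a ranking exercise you can make explicit.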
One minor point on AGI—how can you “get a bunch of forecasting experts together” on something that doesn’t exist and on which there is not even clear agreement around what it actually is?
I’m sure you are familiar with the astonishingly poor record of forecasts about AGI arrival (a bit like nuclear fusion, though at least that’s reasonably well defined)
For someone to be a “forecasting expert” on anything, they have to have a track record of reliably forecasting something, WITH FEEDBACK about their accuracy (which they use to improve). By definition such experts do not exist for something that has not yet come into being and for which there isn’t a specific and clear definition/description. You might start by first gaining a real consensus on a very specific description of what it is you’re forecasting, and then maybe search for forecasting expertise in a similar area that already exists. But I think that would be difficult. AGI “forecasting” is replete with confirmation bias and wishful thinking (and if you challenge that you get the same sort of response you get from challenging religious people over the existence of their deity ;->)
Thanks again—loved it
I have not been to one of these before. I think I should be able to get there depending on my daughter’s work schedule. Is it okay just to turn up? :)
Whilst I appreciate the validity of the criticism offered here of the use of the word emergence (by itself) as if it were an explanation sufficient unto itself, I think it a little harsh. To call it “futile” almost acts as a semantic stop sign itself for the term.
We need to take a little time to understand what is meant by emergence when it is used properly.
First, it is an observation rather than an explanation. But it is an observation with useful descriptive power, since it observes that the phenomenon under consideration is a process whereby larger entities, patterns, and regularities arise through interactions among smaller or simpler entities that themselves do not exhibit such properties.
Note that not all properties that arise from interactions or combinations of smaller components are emergent (e.g. putting a whole bunch of magnets together just gives a larger magnetic field).
So, while “emergence” is hardly an explanation, and one is obliged to look for the mechanisms that lead to the emergent behaviour (such as how the polar hydrogen bonds in H2O give water surface tension, a property that a single H2O molecule does not exhibit), its use as an observation nevertheless has power. It points us to look for (and ask questions about) how properties which do not exist in the subcomponents come to be via the interactions of those components (often multi-factor), and also to see whether there are simple factors or descriptive rules that have predictive power (e.g. the flocking behaviour of birds)
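The flocking example can be made concrete with a toy model. This is a minimal Vicsek-style alignment sketch (my own illustration, with made-up parameters, not something from the post): each agent only turns partway toward the average heading of the group, yet a single shared direction, a property no individual rule mentions, emerges:

```python
import math
import random

def order_parameter(headings):
    """1.0 when all agents point the same way, near 0 for random directions."""
    n = len(headings)
    sx = sum(math.cos(h) for h in headings) / n
    sy = sum(math.sin(h) for h in headings) / n
    return math.hypot(sx, sy)

def step(headings, rate=0.2):
    """Each agent turns a fraction `rate` of the way toward the mean heading."""
    sx = sum(math.cos(h) for h in headings)
    sy = sum(math.sin(h) for h in headings)
    mean = math.atan2(sy, sx)
    # atan2 of the sine/cosine of the difference gives the wrapped angle
    return [h + rate * math.atan2(math.sin(mean - h), math.cos(mean - h))
            for h in headings]

random.seed(1)
headings = [random.uniform(-math.pi, math.pi) for _ in range(50)]
before = order_parameter(headings)   # disordered start
for _ in range(100):
    headings = step(headings)
after = order_parameter(headings)    # near-perfect alignment emerges
```

No line of the update rule refers to "a flock"; the ordered motion is what the local rule produces collectively, which is the sense of emergence described above.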
Hi Capla, no, that is not what Gödel’s theorem says (actually there are two incompleteness theorems)
1) Gödel’s theorems don’t talk about what is knowable, only about what is (formally) provable in a mathematical or logical sense
2) The first incompleteness theorem states that no consistent system of axioms whose theorems can be listed by any sort of algorithm is capable of proving all truths about the arithmetic of the natural numbers. In other words, for any such system there will always be statements about the natural numbers that are true but unprovable within the system. The second incompleteness theorem, an extension of the first, shows that such a system cannot demonstrate its own consistency.
3) This doesn’t mean that some things can never be proven (although it provides some challenges); it does mean that we cannot create a consistent system that can prove, within itself and algorithmically, all things that are true for that system
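For reference, the two theorems can be stated compactly in the standard modern form (where Con(T) is the arithmetized statement that T is consistent):

```latex
\textbf{G1.}\ \text{If } T \text{ is consistent, effectively axiomatizable, and extends basic arithmetic,}\\
\text{then there is a sentence } G_T \text{ with } T \nvdash G_T \ \text{and}\ T \nvdash \neg G_T.\\[4pt]
\textbf{G2.}\ \text{For such a } T:\quad T \nvdash \mathrm{Con}(T).
```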
This creates some significant challenges for AI and consciousness—but perhaps not insurmountable ones.
For example, as far as I know, Gödel’s theorems rest on classical logic. Non-classical logics (for instance quantum logic, where the classical distributive law fails, or paraconsistent logics, where something can be both “true” and “not true” at the same time) may provide some different outcomes
Regarding consciousness, I think I would agree with the thrust of this post: that we cannot yet fully explain or reproduce consciousness (hell, we have trouble defining it) does not mean that it will forever be beyond reach. Consciousness is only mysterious because of our lack of knowledge of it
And we are learning more all the time
we are starting to unravel some of the mechanisms by which consciousness emerges from the brain, since consciousness appears to be a process phenomenon rather than a physical property