Baseline of my opinion on LW topics

To avoid repeating myself, I’d like to state my opinion on a few topics that I expect to be relevant to my future posts here.

You can take it as a baseline or reference for these topics. I do not plan to go into any detail here, and I will not state all my reasons or sources. You may ask for separate posts if you are interested. This is only to provide a context for my comments and posts elsewhere.

If you google me, you may find some of my old (but not that far off the mark) posts about these positions, e.g., here:

http://grault.net/adjunct/index.cgi?GunnarZarncke/MyWorldView

Now to my positions on LW topics.

The Simulation Argument and The Great Filter

On The Simulation Argument, I go for

“(1) the human species is very likely to go extinct before reaching a “posthuman” stage.”

Correspondingly, on The Great Filter, I go for failure to reach

“9. Colonization explosion”.

This is not because I think humanity will self-annihilate soon (though this is a possibility). Instead, I hope that humanity will sooner or later come to terms with its planet. My utopia could be like The Pacifists (a short story in Analog 5).

Why? Because of essential complexity limits.

This falls into the same range as “It is too expensive to spread physically throughout the galaxy.” I know that negative proofs about engineering are notoriously wrong, but that is my best guess. Simplified, one could say that the low-hanging fruit has been picked, and I have some empirical evidence on multiple levels to support this view.

Correspondingly, there is no singularity, because progress is not limited by raw thinking speed but by effective aggregate thinking speed and physical feedback.

What could prove me wrong?

If a serious discussion were to tear my well-prepared arguments and evidence to shreds (quite possible).

At the very high end, a singularity might be possible if one could find a way to simulate physics faster than physics itself (trading physical space for time).

UPDATE 2022-08-30:

If you had asked me about my confidence levels above, I would have said something like 80% that no colonization explosion would be possible, and 90% that space colonization would be slower than 0.5c.

Since then I have updated toward a) less compute needed to implement AI, b) more compute available (Moore’s law stretching farther), and c) more technical progress being possible than expected (popular example: Elon Musk’s companies).

I would now say that I am only 50% confident that space colonization is not possible.

AI

Artificial intelligence and artificial emotion seem plausibly possible to me. Philosophical note: I don’t care on what substrate my consciousness runs, and maybe I am simulated.

I think strong AI is possible and maybe not that far away.

But I also don’t think this will bring the singularity, because of the complexity limits mentioned above. Strong AI will speed up some cognitive tasks with compound interest, but only until the physical feedback level is reached, or until a social feedback level is reached, if the AI is designed that way.
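
To make the “compound interest until physical feedback” point more concrete, here is a toy Amdahl’s-law-style calculation, a sketch of my own rather than a worked-out model; the 70/30 split between accelerated cognitive work and serial physical work is a made-up number:

```python
# Toy illustration (not a real model): cognitive speed-ups compound,
# but overall progress is capped by the serial physical part
# (experiments, manufacturing), so growth flattens out.

def progress_rate(cognitive_speedup, physical_rate=1.0):
    """Overall rate limited Amdahl-style by the non-accelerated physical part."""
    cognitive_fraction, physical_fraction = 0.7, 0.3  # hypothetical split
    return 1.0 / (cognitive_fraction / cognitive_speedup
                  + physical_fraction / physical_rate)

for speedup in [1, 10, 100, 1000]:
    print(speedup, round(progress_rate(speedup), 2))
# 1 -> 1.0, 10 -> 2.7, 100 -> 3.26, 1000 -> 3.33: converging toward 1/0.3 ≈ 3.33
```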

One temporary dystopia that I see is that cognitive tasks are outsourced to AI, and a new round of unemployment drives humans into depression.

I studied artificial intelligence and played around with two models a long time ago:

  1. A simplified layered model of the brain; deep learning applied to free inputs (I canceled this when it became clear that it was too simple and low-level and thus computationally inefficient)

  2. A nested semantic graph approach with the propagation of symbol patterns representing thought (concept only; not realized)

I’d like to try a ‘synthesis’ of these, where microstructure-of-cognition-like activation patterns of multiple deep learning networks are combined with a specialized language and pragmatics structure acquisition model à la Unsupervised learning of natural languages. See my opinion on cognition below for more in this line.
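
A minimal sketch of what I mean by this synthesis, with stand-in recognizers instead of real deep nets; all names, symbols, and thresholds here are hypothetical:

```python
# Purely illustrative: several independent recognizers (standing in for
# specialized deep nets) emit graded symbol activations, and a semantic
# layer looks for co-activation patterns among those symbols.

import random

def fuzzy_recognizers(stimulus):
    """Stand-in for multiple specialized nets: map a raw stimulus to
    graded activations of vague symbols (qualia)."""
    return {
        "snow":  1.0 if "snow" in stimulus else random.uniform(0.0, 0.2),
        "white": 1.0 if "white" in stimulus else random.uniform(0.0, 0.2),
        "cold":  0.8 if "snow" in stimulus else random.uniform(0.0, 0.2),
    }

def semantic_layer(activations, threshold=0.7):
    """Stand-in for the semantic part: detect which symbols co-activate
    and propose a compound concept from them."""
    active = sorted(s for s, a in activations.items() if a >= threshold)
    return tuple(active) if len(active) > 1 else None

concept = semantic_layer(fuzzy_recognizers("white snow outside"))
print(concept)  # e.g. ('cold', 'snow', 'white')
```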

What could prove me wrong?

On the low success end, if it takes longer than I think.

On the high end, if I’m wrong with the complexity limits mentioned above.

UPDATE 2022-08-30:

I still think that AGI is possible and not that far away. In fact, my Metaculus prediction is earlier than the consensus, and I predict less compute than the consensus. I have updated toward shorter timelines, and more dangerous ones. That is also why I have started a project to do something about it. But I still think people underestimate the complexity not of building AGI, but of it easily making sense of the whole complex world and gaining power. Thus while AGI may come soon, I guess the first time the economy doubles in two years is much farther out. All assuming the AGI doesn’t kill us.

Conquering space

Humanity might succeed at leaving the planet, but at a high cost.

By leaving the planet, I mean becoming permanently independent of Earth, but not necessarily leaving the solar system any time soon (speculating on that is beyond my confidence interval).

I think it is more likely that life, not necessarily humanity, leaves the planet. That could be:

  1. artificial intelligence with a robotic body: think of a Curiosity rover 2.0 (most likely).

  2. intelligent life forms bred for life in space: think of magpies, which are already smart and small, reproduce fast, and navigate in 3D.

  3. actual humans in a suitable protective environment, with small autonomous biospheres harvesting asteroids or Mars.

  4. ‘cyborgs’: humans altered or bred to better deal with problems in space such as radiation and the lack of gravity.

  5. other, including miscellaneous ideas from science fiction (least likely or latest).

For most of these (especially those depending on breeding), I’d estimate a time range of a few thousand years (except for the magpies).

What could prove me wrong?

If I’m wrong on the singularity aspect too.

If I’m wrong on the timeline, though by then I will likely be long dead in any case except (1), which I expect to see in my lifetime.

UPDATE 2022-08-30:

I think magpies could be bred for space even faster with modern genetic tech, but nobody seems to be working on it.

Cognitive Base of Rationality, Vagueness, Foundations of Math

How can we as humans create meaning out of noise?

How can we know truth? How does it come about that we know ‘snow is white’ when snow is white?

Cognitive neuroscience and artificial learning seem to point toward two aspects:

Fuzzy learning aspect

Correlated patterns of internal and external perception are recognized (detected) via multiple specialized layered neural nets (basically). This yields qualia like ‘spoon’, ‘fear’, ‘running’, ‘hot’, ‘near’, ‘I’. These are symbols, but they are vague with respect to meaning because they result from a recognition process that optimizes for matching, not correctness or uniqueness.
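
A toy illustration of why these symbols stay vague, under my assumption that the recognizer only picks the best match; the feature sets and prototypes are made up:

```python
# The recognizer optimizes for the best match, not correctness or
# uniqueness, so ambiguous or incomplete inputs still trigger a symbol.

def match_score(features, prototype):
    """Fraction of overlapping features; a crude stand-in for a layered net."""
    return len(features & prototype) / len(prototype)

PROTOTYPES = {
    "spoon": {"handle", "bowl-shape", "metal"},
    "fork":  {"handle", "tines", "metal"},
}

def recognize(features):
    """Return the best-matching symbol and its score, even if the match is poor."""
    return max(((s, match_score(features, p)) for s, p in PROTOTYPES.items()),
               key=lambda x: x[1])

print(recognize({"handle", "metal"}))       # ('spoon', 0.67) though 'fork' fits equally well
print(recognize({"handle", "bowl-shape"}))  # ('spoon', 0.67) despite the missing 'metal'
```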

Semantic learning aspect

On top of the qualia builds the semantic part, which takes the qualia and, instead of acting directly on them (as animals normally do), finds patterns in their activation that are not related to immediate perception or action but at most to memory. These patterns may form new qualia/​symbols.

The use of these patterns is that they allow capturing concepts that are detached from reality (detached insofar as they do not need a stimulus connected in any way to perception).

Concepts like (‘cry-sound’ ‘fear’) or (‘digitalis’ ‘time-forward’ ‘heartache’) or (‘snow’ ‘white’) or, and that is probably the domain of humans, ((‘one’ ‘successor’) ‘two’) or ((‘I’ ‘happy’) (‘I’ ‘think’)).
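
A hypothetical way to write such detached concepts down, purely for illustration: nested groupings of symbols, where concepts can themselves be built from other concepts.

```python
# Illustrative only: concepts as nested tuples of symbols.

CryFear   = ("cry-sound", "fear")
SnowWhite = ("snow", "white")
Successor = (("one", "successor"), "two")      # a concept built from a concept
SelfModel = (("I", "happy"), ("I", "think"))   # reflection on one's own states

def symbols_in(concept):
    """Collect the primitive symbols a (possibly nested) concept is built from."""
    if isinstance(concept, str):
        return {concept}
    return set().union(*(symbols_in(part) for part in concept))

print(symbols_in(Successor))   # {'one', 'successor', 'two'}
print(symbols_in(SelfModel))   # {'I', 'happy', 'think'}
```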

Concepts

Learning works on these concepts just as it does on the ordinary neural nets. Thus concepts reinforced by positive feedback will stabilize, and along with them, the qualia they derive from (if any) will also stabilize.
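
As a minimal sketch, with made-up numbers, of how I picture this stabilization: a concept’s strength grows with positive feedback and decays without it, so only reinforced concepts persist.

```python
# Toy stabilization rule (numbers are arbitrary): strength moves up with
# positive feedback and slowly decays otherwise.

def update(strength, feedback, rate=0.1, decay=0.02):
    """One learning step: add feedback-driven reinforcement, subtract mild decay."""
    return max(0.0, strength + rate * feedback - decay)

strength = 0.5
for feedback in [1, 1, 0, 1, 1, 0, 0, 1]:   # hypothetical feedback history
    strength = update(strength, feedback)
print(round(strength, 2))  # 0.84: the reinforced concept ends up stronger than it started
```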

For certain pure concepts, the usability of the concept hinges not on any external factor (like “how does this help me survive”) but on social feedback about the structure and the process of forming the concepts themselves.

And this is where we arrive at such concepts as ‘truth’ or ‘proposition.’

These are no longer vague, not because they are represented differently in the brain than other concepts, but because they stabilize toward maximized validity (that is, stability due to the absence of external factors, possibly with a speed-up due to social pressure to stabilize). I have written elsewhere that everything that derives its utility not from external use but from internal consistency could be called math.

And that is why math is so hard for some: if you never gained a sufficient core of self-consistent, stabilized concepts, and/​or the usefulness doesn’t derive from internal consistency but from external (“teacher’s password”) usefulness, then it will just not scale to more concepts. (And the reason why science works at all is that science values internal consistency so highly; there is little more dangerous to science than allowing other incentives.)

I hope that this all makes sense. I haven’t summarized this for quite some time.

A few random links that may provide some context:

http://www.blutner.de/NeuralNets/ (this is about the AI context we are talking about)

http://www.blutner.de/NeuralNets/Texts/mod_comp_by_dyn_bin_synf.pdf (research applicable to the above in particular)

http://c2.com/cgi/wiki?LeibnizianDefinitionOfConsciousness (funny description of levels of consciousness)

http://c2.com/cgi/wiki?FuzzyAndSymbolicLearning (old post by me)

http://grault.net/adjunct/index.cgi?VaguesDependingOnVagues (ditto)

Note: Details about the modeling of the semantic part are mostly in my head.

What could prove me wrong?

Well, ‘wrong’ is too strong a word here. This is just my model, and it is not really that concrete. A longer discussion with someone more experienced with AI than I am (and there should be many here) might suffice to rip this apart (provided that I find the time to prepare my model suitably).

UPDATE: Most of the above was hinting at things that have by now become common knowledge. On LW, I’m very happy with the Brain-Like-AGI Safety sequence, which spells out things of which I had only a vague grasp before.

God and Religion

I wasn’t indoctrinated as a child. My truly loving mother is a baptized Christian who lives her faith without being sanctimonious. She always hoped that I would receive my epiphany. My father has a scientifically influenced personal Christian belief.

I can imagine a God consistent with science on the one hand and, on the other hand, with free will, soul, afterlife, trinity, and the Bible (understood as a mix of non-literal word of God and history tale).

It is not that hard if you can imagine (the simulation of) a timeless universe. If you are God and have some plan for Earth but empathize with your creations, then it is not hard to add a few more constraints to certain aggregates called existences or ‘person lives.’ Constraints that realize free will in the sense of ‘not subject to the whole universe-plan satisfaction algorithm.’ Surely not more difficult than consistent time travel.

And souls and the afterlife should be easy to envision for any science fiction reader familiar with superintelligences. But why assume all this? Occam’s razor applies.

There could be a God. And his promise could be real. And it could be a story seeded by an empathizing God—but also a ‘human’ God with his own inconsistencies and moods.

But it also could be that this is all a fairy tale run amok in human brains searching for explanations where there are none. A mass delusion. A fixated meme.

Which is right? It is difficult to put probabilities to stories. I see that I have slowly moved from 50/50 agnosticism to tolerant atheism.

I can’t say that I am waiting for my epiphany. I know too well that my brain will happily find patterns when I let it. But I have encouraged others to pray for me.

My epiphanies, the aha feelings of clarity that I did experience, have all been about deeply connected patterns building on other such patterns, building on reliable facts that are mostly scientific in nature.

But I haven’t lost my morality. It has deepened and widened, and I have become even more tolerant (I hope).

So if God does, against all odds, exist, I hope he will understand my doubts, weigh my good deeds, and forgive me. You could tag me as a godless Christian.

What could prove me wrong?

On the atheist side, I could be moved further by more evidence of religion being a human artifact.

On the theist side, there are two possible avenues:

  1. If I had an unsearched-for epiphany, a real one where I can’t say I was hallucinating, but, e.g., a major consistent insight or a proof of God.

  2. If I were convinced that the singularity is possible. This is because I’d then need to update toward being in a simulation, as per Simulation Argument option 3, and in that case the next most likely explanation for all this god business is actually some imperfect being running the simulation.

Thus I’d like to close with this corollary to the simulation argument:

Arguments for the singularity are also (weak) arguments for theism.

UPDATE 2022-08-30:


Note: I am aware that this long post of controversial opinions, unsupported by evidence (in this post), is bound to draw flak. That is the reason I post it in Comments, lest my small karma be lost completely. I have to repeat that this is meant as context and that I want to elaborate on these points on LW in due time, with more and better-organized evidence.

Edited: Fixed more typos.