I think I prefer, and should prefer, my smoothed-out highs and lows. During a finite manipulation sequence of a galactic supercluster, whose rules I pre-established, I wouldn’t necessarily need to feel much—since that might feel like ‘a lot of pointless muscle straining’—other than a modest, homo sapiens-level positive reinforcement that it’s getting done. Consciousness, if I may also give my best guess, is only good for abstract senses (with and without extensions); and where these abstract senses seem concrete, even to an infinite precision, neither “highs” nor, certainly, “lows” are necessary.
Abe: Why would a being that can create minds at will flinch at their annihilation? The absolute sanctity of minds, even before God, is the sentiment of modern western man, not a careful deduction based on an inconceivably superior intelligence.
Abe: I find it strange how atheists always feel able to speak for God.
If a lie is defined as the avoidance of truthfully satisfying interrogative sentences (this includes remaining silent), then it wouldn’t be honest, under request, to withhold details of a referent. But privacy depends on the existence of some unireferents, as opposed to none and to coreferents. If privacy shouldn’t be abolished entirely, then it isn’t clear that the benefits of honesty as an end in itself are underrated.
As it goes, here is how I’ve come to shut up and do the impossible: Philosophy and (pure) mathematics are activities a cognitive system engages in by taking more (rather than fewer) resources for granted. They are primarily for conceiving destinations in the first place, perhaps continuous ones, where the intuitively impossible becomes possible; they are secondarily for the destinations’ complement on the map, with its solution paths and everything else. Science and engineering, by contrast, are activities a cognitive system engages in by taking fewer (rather than more) resources for granted. They are primarily for the destinations’ complement on the map; they are secondarily for conceiving destinations in the first place, as in, perhaps, getting the system to destinations where even better destinations can be conceived.
Because this understanding is how I’ve come to shut up and do the impossible, it’s somewhat disappointing when philosophy and pure mathematics get ridiculed. To ridicule them must be a relief.
Phil: [. . .] In such a world, how would anybody know if “you” had died?
Eliezer: That scream of horror and embarrassment is the sound that rationalists make when they level up. Sometimes I worry that I’m not leveling up as fast as I used to, and I don’t know if it’s because I’m finally getting the hang of things, or because the neurons in my brain are slowly dying.
Initially, I also thought this blog entry was faulty. But there does seem to be an important difference between having the goal do-A, which succeeds only when A obtains, and having the goal try-A, which succeeds as soon as a finger (or a hyperactuator, in my case) is lifted toward A.
rw: Everything is reality! Speaking of thoughts as if the “mental” is separate from the “physical” indicates implicit dualism.
I can’t recall ever affirming that the chance is negligible that religionists enter the AGI field. Some time ago, I began to anticipate that they would be among the first encountered who express that they act on the possibility that they are confined and sedated, even given a toy universe that, for them, is matryoshka dolls indefinitely all the way in and all the way out.
Tibba, the English grammar is correct. The idea is excruciatingly simple, so I don’t assume it’s extraordinary.
You’re probably trying to say something that should be considered seriously, but I’m having trouble disambiguating your post.
For some, there’s a not-obviously-wrong intuitive sense that not only might there be bad, deathly AIs to avoid, but also bad, more powerfully deterministic AIs to avoid. This latter kind would be so correct about everything in relation to their infra-AIs, potentially including some of us, that they would be indistinguishable from unpredictable puppeteers. For some, then, there must be little intellectual difference between wishy-washy thinking and having to agree with persons whose purposes appear to be nothing less than being, or at least being “a greater causal agent” of, the superior deterministic controllers, approaching reality, the ultimately unpredictable puppeteer.
If Truth is more important than anything else, an infra-AI’s own truth is all it would have. Hence, the problem.