Hikikomori no more? If so (as seems likely what with the girlfriend and all), it gladdens me to hear it.
In the biz we call this selection bias. The most fun example of this is the tale of Abraham Wald and the Surviving Bombers.
I was working in protein structure prediction.
I confess to being a bit envious of this. My academic path after undergrad biochemistry took me elsewhere, alas.
Try it—the first three chapters are available online here. The first one is discursive and easy; the math of the second chapter is among the most difficult in the book and can be safely skimmed; if you can follow the third chapter (which is the first one to present extensive probability calculations per se) and you understand probability densities for continuous random variables, then you’ll be able to understand the rest of the book without formal training.
The stated core goal of MIRI/the old SIAI is to develop friendly AI. With regards to that goal, the sequences are advertising.
Kinda… more specifically, a big part of what they are is an attempt at insurance against the possibility that there exists someone out there (probably young) with more innate potential for FAI research than EY himself possesses but who never finds out about FAI research at all.
Lumifer wrote, “Pretty much everyone does that almost all the time.” I just figured that given what we know of heuristics and biases, there exists a charitable interpretation of the assertion that makes it true. Since the meat of the matter was about deliberate subversion of a clear-eyed assessment of the evidence, I didn’t want to get into the weeds of exactly what Lumifer meant.
But we do run biological computations (assuming that the exercise of human intelligence reduces to computation) to make em technology possible.
Since we’re just bouncing short comments off each other at this point, I’m going to wrap up now with a summary of my current position as clarified through this discussion. The original comment posed a puzzle:
Brain emulations seem to represent an unusual possibility for an abrupt jump in technological capability, because we would basically be ‘stealing’ the technology rather than designing it from scratch. …If this is an unusual situation however, it seems strange that the other most salient route to superintelligence—artificial intelligence designed by humans—is also often expected to involve a discontinuous jump in capability, but for entirely different reasons.
The commonality is that both routes attack a critical aspect of the manifestation of intelligence. One goes straight for an understanding of the abstract computation that implements domain-general intelligence; the other goes at the “interpreter”, physics, that realizes that abstract computation.
Making intelligence-implementing computations substrate-independent in practice (rather than just in principle) already expands our capabilities—being able to run those computations in places pink goo can’t go and at speeds pink goo can’t manage is already a huge leap.
I’m just struck by how the issue of guilt here turns on mental processes inside someone’s mind and not at all on what actually happened in physical reality.
Mental processes inside someone’s mind actually happen in physical reality.
Just kidding; I know that’s not what you mean. My actual reply is that it seems manifestly obvious that a person in some set of circumstances that demand action can make decisions that careful and deliberate consideration would judge to be the best, or close to the best, possible in prior expectation under those circumstances, and yet the final outcome could be terrible. Conversely, that person might make decisions that the same careful and deliberate consideration would judge to be terrible and foolish in prior expectation, and yet through uncontrollable happenstance the final outcome could be tolerable.
Because the solution has an immediate impact on the exercise of intelligence, I guess? I’m a little unclear on what other problems you have in mind.
That’s because we live in a world where… it’s not great, but better than speculating on other people’s psychological states.
I wanted to put something like this idea into my own response to Lumifer, but I couldn’t find the words. Thanks for expressing the idea so clearly and concisely.
I wasn’t talking about faster progress as such, just about a predictable single large discontinuity in our capabilities at the point in time when the em approach first bears fruit. It’s not a continual feedback, just an application of intelligence to the problem of making biological computations (including those that implement intelligence) run on simulated physics instead of the real thing.
I would say that I don’t do that, but then I’d pretty obviously be allowing the way I desire the world to be to influence my assessment of the actual state of the world. I’ll make a weaker claim—when I’m engaging conscious effort in trying to figure out how the world is and I notice myself doing it, I try to stop. Less Wrong, not Absolute Perfection.
Pretty much everyone does that almost all the time. So, is everyone blameworthy? Of course, if everyone is blameworthy then no one is.
That’s a pretty good example of the Fallacy of Gray right there.
Hmm… let me think…
The materialist thesis implies that a biological computation can be split into two parts: (i) a specification of a brain-state; (ii) a set of rules for brain-state time evolution, i.e., physics. When biological computations run in base reality, brain-state maps to program state and physics is the interpreter, pushing brain-states through the abstract computation. Creating an em then becomes analogous to using Futamura’s first projection to build in the static part of the computation—physics—thereby making the resulting program substrate-independent. The entire process of creating a viable emulation strategy happens when we humans run a biological computation that (i) tells us what is necessary to create a substrate-independent brain-state spec and (ii) solves a lot of practical physics simulation problems, so that to generate an em, the brain-state spec is all we need. This is somewhat analogous to Futamura’s second projection: we take the ordered pair (biological computation, physics), run a particular biological computation on it, and get a brain-state-to-em compiler.
So intelligence is acting on itself indirectly through the fact that an “interpreter”, physics, is how reality manifests intelligence. We aim to specialize physics out of the process of running the biological computations that implement intelligence, and by necessity, we’re using a biological computation that implements intelligence to accomplish that goal.
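To make the analogy above a bit more concrete, here is a minimal sketch of my own (in Python), not anything from the original comment: `physics` plays the interpreter, a brain-state plays the program, and a toy `specialize` stands in for the emulation-research effort. All the names and placeholder data are hypothetical, and `specialize` only fixes its static argument rather than doing genuine program specialization, which is enough to show the shape of the analogy.

```python
# Hypothetical sketch of the em-as-Futamura-projection analogy.
# `specialize(f, static)` is a stand-in partial evaluator: it merely bakes in
# the static argument, whereas a real one would emit specialized residual code.

def specialize(f, static):
    """Toy partial evaluator: fix f's static (known) argument."""
    return lambda dynamic: f(static, dynamic)

def physics(brain_state, sensory_input):
    """The 'interpreter': pushes a brain-state through time evolution.
    (Placeholder dynamics, obviously.)"""
    return {"state": brain_state, "last_input": sensory_input}

alice_scan = {"neurons": "placeholder scan data"}

# First projection: specialize physics to one brain-state -> a single em.
alice_em = specialize(physics, alice_scan)

# Second projection: specialize the specializer to physics
# -> a brain-state-to-em "compiler": hand it any scan, get back an em.
brain_state_to_em = specialize(specialize, physics)
bob_em = brain_state_to_em({"neurons": "another placeholder scan"})
```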
It won’t have source code per se, but one can posit the existence of a halting oracle without generating an inconsistency.
My intuition—and it’s a Good one—is that the discontinuity is produced by intelligence acting to increase itself. It’s built into the structure of the thing acted upon that it will feed back to the thing doing the acting. (Not that unique an insight around these parts, eh?)
Okay, here’s a metaphor(?) to put some meat on the bones of this comment. Suppose you have an interpreter for some computer language and you have a program written in that language that implements partial evaluation. With just these tools, you can make the partial evaluator (i) act as a compiler, by running it on an interpreter and a program; (ii) build a compiler, by running it on itself and an interpreter; (iii) build a generic interpreter-to-compiler converter, by running it on itself and itself. So one piece of technology “telescopes” by acting on itself. These are the Three Projections of Doctor Futamura.
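For anyone who wants to see the telescoping in miniature, here is a toy sketch (Python, my own illustration). The `specialize` function below just closes over its known argument instead of performing genuine partial evaluation, but that is enough structure to exhibit all three projections.

```python
# Toy illustration of the Three Projections of Doctor Futamura.
# Caveat: `specialize` only fixes a known argument; a genuine partial evaluator
# would analyze program text and emit residual code. The shape of the three
# projections is the same either way.

def specialize(f, static):
    """Stand-in partial evaluator: fix f's first (static) argument."""
    return lambda dynamic: f(static, dynamic)

def interpreter(source, inp):
    """A trivial 'language': source programs are just Python callables."""
    return source(inp)

double = lambda x: 2 * x

# (i) Act as a compiler: specialize the interpreter to one program.
compiled_double = specialize(interpreter, double)
assert compiled_double(21) == 42

# (ii) Build a compiler: specialize the specializer to the interpreter.
compiler = specialize(specialize, interpreter)
assert compiler(double)(21) == 42

# (iii) Build an interpreter-to-compiler converter: specialize the specializer to itself.
compiler_generator = specialize(specialize, specialize)
assert compiler_generator(interpreter)(double)(21) == 42
```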
Fungible. The term is still current within economics, I believe. If something is fungible, it stands to reason that one can funge it, nu?
As Vaniver mentioned, it relates to exploring trade-offs among the various goals one has / things one values. A certain amount of it arises naturally in the planning of any complex project, but it seems like the deliberate practice of introspecting on how one’s goals decompose into subgoals and on how they might be traded off against one another to achieve a more satisfactory state of things is an idea that is novel, distinct, and conceptually intricate enough to deserve its own label.
Yeesh. These people shouldn’t let feelings or appearances influence their opinions of EY’s trustworthiness—or “morally repulsive” ideas like justifications for genocide. That’s why I feel it’s perfectly rational to dismiss their criticisms—that and the fact that there’s no evidence backing up their claims. How can there be? After all, as I explain here, Bayesian epistemology is central to LW-style rationality and related ideas like Friendly AI and effective altruism. Frankly, with the kind of muddle-headed thinking those haters display, they don’t really deserve the insights that LW provides.
There, that’s 8 out of 10 bullet points. I couldn’t get the “manipulation” one in because “something sinister” is underspecified; as to the “censorship” one, well, I didn’t want to mention the… thing… (ooh, meta! Gonna give myself partial credit for that one.)
Ab, V qba’g npghnyyl ubyq gur ivrjf V rkcerffrq nobir; vg’f whfg n wbxr.
FWIW, in my estimation your special-snowflake-nature is somewhere between “more than slightly, less than somewhat” and “potential world-beater”. Those are wide limits, but they exclude zero.