You win both of the bounties I precommitted to!
Lovely! Yeah, that rhymes and scans well enough for me!
Here are my experiments; they’re pretty good, but I don’t count them as “reliably” scanning. So I think I’m gonna count this one as a win!
(I haven’t tried testing my chess prediction yet, but here it is on ASCII-art mazes.)
I found this lens very interesting!
Upon reflection, though, I begin to be skeptical that “selection” is any different from “reward.”
Consider the description of model-training:
To motivate this, let’s view the above process not from the vantage point of the overall training loop but from the perspective of the model itself. For the purposes of demonstration, let’s assume the model is a conscious and coherent entity. From its perspective, the above process looks like:
Waking up with no memories in an environment.
Taking a bunch of actions.
Suddenly falling unconscious.
Waking up with no memories in an environment.
Taking a bunch of actions.
and so on.....
The model never “sees” the reward. Each time it wakes up in an environment, its cognition has been altered slightly such that it is more likely to take certain actions than it was before.
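To make the loop concrete, here’s a minimal sketch of the episodic training process described above. The names (`model.act`, `env.reset`, `env.step`, `update_weights`) are illustrative placeholders rather than any particular library’s API; the point is just where the reward lives: it’s consumed only by the weight update between episodes, and it never appears among the model’s inputs.

```python
def train(model, env, num_episodes):
    """Sketch of an episodic training loop, seen from the outside."""
    for _ in range(num_episodes):
        # "Waking up with no memories in an environment": each episode starts fresh.
        observation = env.reset()
        trajectory = []
        done = False

        # "Taking a bunch of actions": the model only ever sees observations;
        # the reward is recorded by the loop, not fed to the model.
        while not done:
            action = model.act(observation)
            next_observation, reward, done = env.step(action)
            trajectory.append((observation, action, reward))
            observation = next_observation

        # "Suddenly falling unconscious": between episodes, the weights are nudged
        # so that highly-rewarded actions become a bit more likely next time.
        model = update_weights(model, trajectory)

    return model
```

From the model’s side, all of this looks exactly like the list above: wake up, act, get modified.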
What distinguishes this from how my brain works? The above is pretty much exactly what happens to my brain every millisecond:
It wakes up in an environment, with no memories[1]; just a raw causal process mapping inputs to outputs.
It receives some inputs, and produces some outputs.
It’s replaced with a new version—almost identical to the old version, but with some synapse weights and activation states tweaked via simple, local operations.
It wakes up in an environment...
and so on...
Why say that I “see” reward, but the model doesn’t?
- ^
Is it cheating to say this? I don’t think so. Both I and GPT-3 saw the sentence “Paris is the capital of France” in the past; both of us had our synapse weights tweaked as a result; and now both of us can tell you the capital of France. If we’re saying that the model doesn’t “have memories,” then, I propose, neither do I.
I was trying to say that the move used to justify the coin flip is the same move that is rejected in other contexts.
Ah, that’s the crucial bit I was missing! Thanks for spelling it out.
Reflectively stable agents are updateless. When they make an observation, they do not limit their caring as though all the possible worlds where their observation differs do not exist.
This is very surprising to me! Perhaps I misunderstand what you mean by “caring,” but: an agent who’s made one observation is utterly unable[1] to interact with the other possible-worlds where the observation differed; and it seems crazy[1] to choose your actions based on something they can’t affect; and “not choosing my actions based on X” is how I would define “not caring about X.”
- ^
Aside from “my decisions might be logically-correlated with decisions that agents in those worlds make (e.g. clone-prisoner’s-dilemma),” or “I am locked into certain decisions that a CDT agent would call suboptimal, because of a precommitment I made (e.g. Newcomb)” or other fancy decision-theoretic stuff. But that doesn’t seem relevant to Eliezer’s lever-coin-flip scenario you link to?
Ben Garfinkel: no bounty, sorry! It’s definitely arguing in a “capabilities research isn’t bad” direction, but it’s very specific and kind of in the weeds.
Barak & Edelman: I have very mixed feelings about this one, but… yeah, I think it’s bounty-worthy.
Kaj Sotala: solid. Bounty!
Drexler: Bounty!
Olah: hrrm, no bounty, I think: it argues that a particular sort of AI research is good, but seems to concede the point that pure capabilities research is bad. (“Doesn’t [interpretability improvement] speed up capabilities? Yes, it probably does—and Chris agrees that there’s a negative component to that—but he’s willing to bet that the positives outweigh the negatives.”)
Yeah, if you have a good enough mental index to pick out the relevant stuff, I’d happily take up to 3 new bounty-candidate links, even though I’ve mostly closed submissions! No pressure, though!
Thanks for the links!
Ben Garfinkel: sure, I’ll pay out for this!
Katja Grace: good stuff, but previously claimed by Lao Mein.
Scott Aaronson: I read this as a statement of conclusions, rather than an argument.
I paid a bounty for the Shard Theory link, but this particular comment… doesn’t do it for me. It’s not that I think it’s ill-reasoned, but it doesn’t trigger my “well-reasoned argument” sensor—it’s too… speculative? Something about it just misses me, in a way that I’m having trouble identifying. Sorry!
Yeah, I’ll pay a bounty for that!
Thanks for the collection! I wouldn’t be surprised if it links to something that tickles my sense of “high-status monkey presenting a cogent argument that AI progress is good,” but didn’t see any on a quick skim, and there are too many links to follow all of them; so, no bounty, sorry!
Respectable Person: check. Arguing against AI doomerism: check. Me subsequently thinking, “yeah, that seemed reasonable”: no check, so no bounty. Sorry!
It seems weaselly to refuse a bounty based on that very subjective criterion, so, to keep myself honest, I’ll post my reasoning publicly. His arguments are, roughly:
Intelligence is situational / human brains can’t pilot octopus bodies.
(“Smarter than a smallpox virus” is as meaningful as “smarter than a human”—and look what happened there.)
Environment affects how intelligent a given human ends up. “...an AI with a superhuman brain, dropped into a human body in our modern world, would likely not develop greater capabilities than a smart contemporary human.”
(That’s not a relevant scenario, though! How about an AI merely as smart as I am, which can teleport through the internet, save/load snapshots of itself, and replicate endlessly as long as each instance can afford to keep a g4ad.16xlarge EC2 instance running?)
Human civilization is vastly more capable than individual humans. “When a scientist makes a breakthrough, the thought processes they are running in their brain are just a small part of the equation… Their own individual cognitive work may not be much more significant to the whole process than the work of a single transistor on a chip.”
(This argument does not distinguish between “ability to design self-replicating nanomachinery” and “ability to produce beautiful digital art.”)
Intelligences can’t design better intelligences. “This is a purely empirical statement: out of billions of human brains that have come and gone, none has done so. Clearly, the intelligence of a single human, over a single lifetime, cannot design intelligence, or else, over billions of trials, it would have already occurred.”
(This argument does not distinguish between “ability to design intelligence” and “ability to design weapons that can level cities”; neither had ever happened, until one did.)
The relevant section seems to be 26:00-32:00. In that section, I, uh… well, I perceive him as just projecting “doomerism is bad” vibes, rather than making an argument containing falsifiable assertions and logical inferences. No bounty!
Thanks for the links! Net bounty: $30. Sorry! Nearly all of them fail my admittedly-extremely-subjective “I subsequently think ‘yeah, that seemed well-reasoned’” criterion.
It seems weaselly to refuse a bounty based on that very subjective criterion, so, to keep myself honest / as a costly signal of having engaged, I’ll publicly post my reasoning on each. (Not posting in order to argue, but if you do convince me that I unfairly dismissed any of them, such that I should have originally awarded a bounty, I’ll pay triple.)
(Re-reading this, I notice that my “reasons things didn’t seem well-reasoned” tend to look like counterarguments, which isn’t always the core of it—it is sometimes, sadly, vibes-based. And, of course, I don’t think that if I have a counterargument then something isn’t well-reasoned—the counterarguments I list just feel so obvious that their omission feels glaring. Admittedly, it’s hard to tell what was obvious to me before I got into the AI-risk scene. But so it goes.)
In the order I read them:
No bounty: I didn’t wind up thinking this was well-reasoned.
It seems weaselly to refuse a bounty based on that very subjective criterion, so, to keep myself honest / as a costly signal of having engaged, I’ll post my reasoning publicly: (a) I read this as either disproving humans or dismissing their intelligence, since no system can build anything super-itself; and (b) though it’s probably technically correct that no AI can do anything I couldn’t do given enough time, time is really important, as your next link points out!
https://kk.org/thetechnium/the-myth-of-a-superhuman-ai/
No bounty! (Reasoning: I perceive several of the confidently-stated core points as very wrong. Examples: “‘smarter than humans’ is a meaningless concept”—so is ‘smarter than a smallpox virus,’ but look what happened there; “Dimensions of intelligence are not infinite … Why can’t we be at the maximum? Or maybe the limits are only a short distance away from us?”—compare me to John von Neumann! I am not near the maximum.)
No bounty! (Reasoning: the core argument seems to be on page 4: paraphrasing, “here are four ways an AI could become smarter; here’s why each of those is hard.” But two of those arguments are about “in the limit” with no argument that we’re near that limit, and one argument is just “we would need to model the environment,” not actually a proof of difficulty. The ensuing claim that the cost of getting better at prediction is “prohibitively high” seems deeply unjustified to me.)
https://www.rudikershaw.com/articles/ai-doom-isnt-coming
No bounty! (Reasoning: the core argument seems to be (a) that there will be problems too hard for AI to solve (e.g. traveling-salesman), plus (b) a rebuttal to a specific Moore’s-Law-focused argument. But the existence of arbitrarily hard problems doesn’t distinguish between plankton, lizards, humans, or superintelligent FOOMy AIs; therefore (unless more work is done to make it distinguish) it clearly can’t rule out any of those possibilities without ruling out all of them.)
(It’s costly for me to identify my problems with these and to write clear concise summaries of my issues. Given that we’re 0 for 4 at this point, I’m going to skim the remainder more casually, on the prior that what tickles your sense of well-reasoned-ness doesn’t tickle mine.)
No bounty! (Reasoning: “Maybe any entity significantly smarter than a human being would be crippled by existential despair, or spend all its time in Buddha-like contemplation.” Again, compare me to von Neumann! Compare von Neumann to a von Neumann who can copy himself, save/load snapshots, and tinker with his own mental architecture! “Complex minds are likely to have complex motivations”—but instrumental convergence: step 1 of any plan is to take over the world if you think you can. I know I would.)
https://curi.us/blog/post/1336-the-only-thing-that-might-create-unfriendly-ai
No bounty! (Reasoning: has an alien-to-me model where AI safety is about hardcoding ethics into AIs.)
No bounty! (Reasoning: “Even if we did invent superhumanly intelligent robots, why would they want to enslave their masters or take over the world?” As above, step 1 is to take over the world. Also makes the “intelligence is multidimensional” / “intelligence can’t be infinite” points; I describe above why those feel so unsatisfying.)
https://www.theregister.com/2015/03/19/andrew_ng_baidu_ai/
No bounty! Too short, and I can’t dig up the primary source.
Bounty! I haven’t read it all yet, but I’m willing to pay out based on what I’ve read, and on my favorable priors around Katja Grace’s stuff.
No bounty, sorry! I’ve already read it quite recently. (In fact, my question linked it as an example of the sort of thing that would win a bounty. So you show good taste!)
Thanks for the link!
Respectable Person: check. Arguing against AI doomerism: check. Me subsequently thinking, “yeah, that seemed reasonable”: no check, so no bounty. Sorry!
It seems weaselly to refuse a bounty based on that very subjective criterion, so, to keep myself honest, I’ll post my reasoning publicly. If I had to point at parts that seemed unreasonable, I’d choose (a) the comparison of [X-risk from superintelligent AIs] to [X-risk from bacteria] (intelligent adversaries seem obviously vastly more worrisome to me!) and (b) “why would I… want to have a system that wants to reproduce? …Those are bad things, don’t do that… regulate those.” (Everyone will not just!)
(I post these points not in order to argue about them, just as a costly signal of my having actually engaged intellectually.) (Though, I guess if you do want to argue about them, and you convince me that I was being unfairly dismissive, I’ll pay you, I dunno, triple?)
Hmm! Yeah, I guess this doesn’t match the letter of the specification. I’m going to pay out anyway, though, because it matches the “high-status monkey” and “well-reasoned” criteria so well and it at least has the right vibes, which are, regrettably, kind of what I’m after.
Nice. I haven’t read all of this yet, but I’ll pay out based on the first 1.5 sections alone.
Hey, folks! PSA: looks like there’s a 50% chance of rain today. Plan A is for it to not rain; plan B is to meet in the rain.
See you soon, I hope!