I remember reading a thread on Facebook where Eliezer and Robin Hanson were discussing the implications of AlphaGo (or AlphaZero) for the AI-foom debate, and Robin made an analogy to linear regression as one thing that machines can do better than humans, but which doesn’t make them superhuman. Does anyone remember what I’m talking about?
I remember someone (Paul Christiano, I think?) commenting somewhere on LessWrong, saying that Ian Goodfellow got the first GAN working on the same day that he had the idea, with a link to an article. Does anyone happen to remember that comment, or have a link to that article?
Thank you for being thoughtful about how to serve the community’s needs!
Hello and welcome!
I felt much warmth reading your intro. I remember how magical LessWrong was for me when I first discovered it. (Now, almost a decade in, I have a different feeling towards it, but I remain deeply proud to participate in this community.)
All of which is to say that I feel vicarious excitement for the experiences you have ahead of you. I look forward to meeting you in person one day. : )
(The only troublesome side effect: school has become much less tolerable as a whole. I’m truly trying to get through it with top grades, but now that I see how much time I waste there, it’s much harder to try to be interested in the actual material...)
I think this would not have helped me very much, so YMMV, but one frame you might want to consider is that of half-assing [school] with everything you’ve got.
[Eli’s personal notes. Feel free to comment or ignore.]
My summary of Eliezer’s overall view:
1. I don’t see how you can get cognition to “stack” like that, short of running a Turing machine made up of the agents in your system. But if you do that, then we throw alignment out the window.
2. There’s this strong X-and-only-X problem.
If our agents are perfect imitations of humans, then we do solve this problem. But having perfect imitations of humans is a very high bar that requires already having a very powerful superintelligence. And now we’re just passing the buck. How is that extremely powerful superintelligence aligned?
If our agents are not perfect imitations, it seems like we have no guarantee of X-and-only-X.
This might still work, depending on the exact ways in which the imitation deviates from the subject, but most of the plausible ways seem like they don’t solve this problem.
And regardless, even if it only deviated in ways that we think are safe, we would want some guarantee of that fact.
This is a particularly helpful answer for me somehow. Thanks.
I think I might add one more: probability. For instance, “what are the base rates for people meeting good cofounders (in general, or in specific contexts)?” Knowing the answer to this might tell you how much you should make tradeoffs to optimize for working with possible cofounders.
Though, probably “risk” and “probability” should be one category.
Really? The plausibility ordering is “transplant to new body > become robot > revive old body”?
I would have guessed it would be “revive old body > transplant to new body > become robot”.
Am I missing something?
What seems ideal to me would be doing both: remove the head from the body, and then cryopreserve and store them separately. This would give you the benefit of faster perfusion of the brain and ease of transport in an emergency, while also keeping the rest of the body around on the off-chance that it contains personality-relevant info.
I might consider this “option” [Is this an option? As far as I know, no one has done this, so it would presumably be a special arrangement with Alcor.] when I am older and richer.
It seems worth noting that I have opted for neuropreservation instead of full body, at least at this time, in large part due to the price difference. The “inclination to cryopreserve my full body” noted above was not sufficient to sway my choice.
Fortunately, before the Coroner executed a search warrant, her head mysteriously disappeared from the Alcor facility. That gave Alcor the time to get a permanent injunction in the courts against autopsying her head.
Wow. Sounds like that was an exciting (and/or nerve-wracking) week at Alcor!
I probably do basic sanity checks moderately often, just to see if something makes sense in context. But that’s already intuition-level, almost.
If it isn’t too much trouble, can you give four more real examples of when you’ve done this? (They don’t need to be as detailed as your first one. A sentence describing the thing you were checking is fine.)
Last time I actually pulled up Excel was when Taleb was arguing against IQ and said its only use is to measure low IQ. I wanted to see if this could explain (very) large country differences. So I made a trivial model where parts of the population are affected by various health issues that can each drop IQ by 10 points. And the answer was yes: if you actually have multiple causes and they stack up, you can end up with the incredibly low averages we see (in the 60s for some areas).
I’m glad that I asked the alternative phrasing of my question, because this anecdote is informative!
Can you be more specific? Presumably it was possible to open a spreadsheet when you were typing this answer, but I’m guessing that you didn’t?
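(To make the stacking idea concrete, here’s a minimal sketch of the kind of model described in the anecdote above. The conditions, prevalences, and the flat 10-point penalty per condition are all made up for illustration; the point is just that several common causes can stack to population averages in the 60s.)

```python
import random

random.seed(0)

# Hypothetical conditions and prevalences, chosen only for illustration.
conditions = {
    "malnutrition": 0.8,
    "parasite_load": 0.9,
    "iodine_deficiency": 0.7,
    "lead_exposure": 0.6,
    "malaria": 0.5,
}

def simulated_iq():
    iq = random.gauss(100, 15)  # baseline distribution
    for prevalence in conditions.values():
        if random.random() < prevalence:
            iq -= 10  # flat 10-point drop per condition, as in the anecdote
    return iq

sample = [simulated_iq() for _ in range(100_000)]
print(f"mean IQ: {sum(sample) / len(sample):.1f}")
# Expected drop = 10 * (0.8 + 0.9 + 0.7 + 0.6 + 0.5) = 35 points,
# so the simulated mean lands near 65: an average in the 60s.
```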
and it’s very difficult to have [a general intelligence] below human-scale!
I would be surprised if this was true, because it would mean that the blind search process of evolution was able to create a close to maximally-efficient general intelligence.
Greg Cochran’s idea
Do you have a citation for this?
Perhaps even simpler: it is adaptive to have a sense of fairness because you don’t want to be the jerk, ’cuz then everyone will dislike you, oppose you, and not aid you.
The biggest, meanest monkey doesn’t stay on top for very long, but a big, largely fair monkey does?
Why do people seem to mean different things by “I want the pie” and “It is right that I should get the pie”? Why are the two propositions argued in different ways?
I want to consider this question carefully.
My first answer is that arguing about morality is a political maneuver that is more likely to work for getting what you want than simply declaring your desires.
But that raises the question: why is it more likely to work? Why are other people, or non-sociopaths, swayed by moral arguments?
It seems like they, or their genes, must get something out of being swayed by moral arguments.
You might think that it is better coordination or something. But I don’t think that adds up. If everyone makes moral arguments insincerely, then the moral arguments don’t actually add any coordination.
But remember that morality is enforced...?
Ok. Maybe the deal is that humans are loss averse. And they can project, in any given conflict, being in the weaker party’s shoes, and generalize the situation to other situations that they might be in. And so, any given onlooker would prefer norms that don’t hurt the loser too badly? And so, they would opt into a timeless contract where they would uphold a standard of “fairness”?
But also the contract is enforced.
I think this can maybe be said more simply? People have a sense of rage at someone taking advantage of someone else iff they can project that they could be in the loser’s position?
And this makes sense if the “taking advantage” is likely to generalize. If the jerk is pretty likely to take advantage of you, then it might be adaptive to oppose the jerk in general?
For one thing, if you oppose the jerk when he bullies someone else, then that someone else is more likely to oppose him when he is bullying you.
Or maybe this can be even more simply reduced to a form of reciprocity? It’s adaptive to do favors for non-kin, iff they’re likely to do favors for you?
There’s a bit of a bootstrapping problem there, but it doesn’t seem insurmountable.
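(A toy check on that reciprocity story, with made-up payoffs: in a repeated favor-trading game, a tit-for-tat-style reciprocator does well with other reciprocators and loses almost nothing to free riders, so doing favors is adaptive exactly when partners are likely to return them.)

```python
# Toy iterated favor-trading game. Payoffs are invented: doing a favor
# costs the helper 1 and benefits the recipient 3.

def play(strategy_a, strategy_b, rounds=20):
    """Return total payoffs for two strategies over repeated interactions."""
    score_a = score_b = 0
    last_a = last_b = True  # each starts by assuming goodwill
    for _ in range(rounds):
        a_helps = strategy_a(last_b)
        b_helps = strategy_b(last_a)
        if a_helps:
            score_a -= 1
            score_b += 3
        if b_helps:
            score_b -= 1
            score_a += 3
        last_a, last_b = a_helps, b_helps
    return score_a, score_b

reciprocator = lambda partner_helped_last: partner_helped_last  # tit-for-tat
free_rider = lambda partner_helped_last: False                  # never helps

print("reciprocator vs reciprocator:", play(reciprocator, reciprocator))
print("reciprocator vs free rider:  ", play(reciprocator, free_rider))
print("free rider vs free rider:    ", play(free_rider, free_rider))
# Two reciprocators each net +2 per round; a reciprocator facing a free
# rider loses only the first round before cutting them off. So favors for
# non-kin pay off iff the favors are likely to be returned.
```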
I want to keep in mind that all of this is subject to scapegoating dynamics, where some group A coordinates to keep another group B down, because A and B can be clearly differentiated and therefore members of A don’t have to fear the bullying of other members of A.
This seems like it has actually happened, a bunch, in history. Whites and Blacks in American history is a particularly awful example that comes to mind.