if both players play (C, C) and then divide up the points evenly at the end, isn’t that sort of… well… communism?
Eliezer, you have just replaced Reeves’ substance with your own symbol. What’s your point here?
Untranslatable 2 is the thought sharing sex.
Sprite, you are, by definition, wrong.
Thirding D Franke’s idea.
Eliezer, a thought occurs. I’m sure the new setup will be great for everyone who wants to make sure they’re using the right priors and calculating the correct odds about whether to bet on Obama or not. Or indeed, trying their utmost to eliminate every source of bias from their life and turn into a giant lookup table or something. But I’ve much preferred reading your assorted ramblings on things like quantum mechanics, timeless physics, and especially low-level AI theory. Wrong motives? Meh, maybe. I’m sure the answer is ‘the elimination of bias and mind projection is the first step along the Way,’ and that’s fine, and I’m going to get involved. But I guess I just want to know whether or not you’ll be writing in the same vein as over the last couple of years, which have opened a huge number of doors for me and how I think.
Caledonian, I don’t think you realise just how much you do seem to look forward to that. If Eliezer’s so far beyond saving, what’s your rationale here?
Or indeed Marcello Herreshoff?
Exciting stuff. Looking forward to having the OB back catalogue sorted into sequences. That’ll make it much easier for me to badger everyone I know to get reading.
Rob—that’s because The Wire is more like real life than real life.
I can see the link to the Chronophone here Eliezer. What would Benny F have found most shocking about today? How can we extrapolate that forwards?
Surely the most scary changes will be in ethics and the way we think of the human condition and personal identity.
I’m currently most of the way through The Mind’s I, and if Hofstadter’s (very plausible) musings on identity are anything like accurate, we’re going to have to start thinking very differently about who we are, and even whether that question has any real application. My shocking prediction for 100, 500, 1000 years’ time? There won’t be any individuals, any notion of ‘I’.
The repercussions don’t really need spelling out or analysing here, and I’m not going to try and predict how things will work. That’s all I’ve got. Human history is a list of examples of our intuitions being exploded by our observations. Individual personal identity over time is an intuitive illusion, and one that’ll become increasingly transparent, and less useful, as time goes by. This scares the living hell out of me—I can’t think of any way I could possibly feel more out of place.
And I’m certainly not going to write any fiction set in that world.
Eliezer, does The Adaptive Stereo have an analogic application here?
To compress the idea: if you slowly turn down the strength of a signal into perception (in this case sound), you can make large, pleasant, periodic steps back up without actually going anywhere. Or at least, you can climb more slowly.
Any logical reason why this wouldn’t work for hedons? ‘Going digital’ might nullify this effect, but in that case we just wouldn’t do that, right?
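The effect is easy to sketch numerically. Suppose perceived pleasure tracks the difference between the signal and a slowly adapting baseline: letting the baseline drift down between steps makes an unchanged absolute signal register as a fresh gain each time. A toy illustration only; the decay rate and step count are arbitrary assumptions, not a model of real hedonics:

```python
# Toy model: perceived intensity = signal - adapted baseline.
# Between "step-ups" the baseline drifts downward, so the same
# absolute signal level registers as a fresh, pleasant increase.

def perceived(signal, baseline):
    return signal - baseline

signal = 10.0
baseline = 10.0  # fully adapted: no perceived pleasure at the start
steps = []
for _ in range(5):
    baseline *= 0.8                       # adaptation: baseline decays
    steps.append(perceived(signal, baseline))

print(steps)   # every entry positive: each feels like a step up
print(signal)  # yet the absolute signal never increased
```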
Finally, I would dispute the notion that a periodic incremental increase in hedons flowing into us is how human pleasure works. The key notion here is surely not pleasure but payoff—continually getting something (even exponentially more of something) for nothing won’t feel like an orgasm getting better and better.* Unless you make some serious architectural changes. And, for me at least, that would be filed under ‘wirehead’. To continue the orgasm metaphor, if you could press a button (no puns please) and come, you’d quickly get bored. It might even turn you off actual sex.
The future scares me.
I know that we won’t necessarily get all these billions and trillions of hedons free—we would probably seek to set up some carrots and sticks of our own etc. But still. It’d be tough not to just plug yourself in given the option. Like you say though, easier to poke holes than to propose solutions. Will ponder on.
Marcello, very well put.
*This is my intuition talking, but surely that’s what we’re running on here?
[...]my experience of drugs is as nonexistent as my experience of torture.
There’s something imbalanced about that.
Agreed. I’m sure both can be procured somewhere in the Bay Area though. Great material for blogging too!
Is the equivalent pleasure one that overrides everything with the demand to continue and repeat it?
Yes. And that’s as horrible an idea as eternal torture. I’m surprised you haven’t cited any of the studies about the relative happiness of lottery winners (compared to their expectations), though I seem to remember references in some of the posts from about a year back.
Being able to change the rules of the game is dangerous. Being able to change your brain so you perceive the game differently is dangerous. Achieving the capability to do both within a short time window is my favourite candidate for a Great Filter.
Thus fails the Utopia of playing lots of really cool video games forever.
Not convinced. Based on my experience of what people are like: from the moment games become immersive enough, and we have the technology to plug in for good, the problem of ‘no lasting consequences’ will vanish for people who want it to. There are already plenty of people willing to plug into WoW for arbitrary amounts of time if they are able. Small increases in immersiveness and catheter technology will lead to large increases in uptime.
phane touches on something interesting just above. One shouldn’t talk about video games or even VR as a special case; one should talk about non-magical-boundary sensory input and our reactions. I’m fully in agreement that you should YANK OUT THE WIRES, but I’m having trouble generalizing exactly why. Something to do with ‘the more real your achievements the better’? Doesn’t feel right. If this has come up implicitly in the Fun Theory series, apologies for not having spotted it.
Also, seconding Peter dB. Saying ‘that reminds me of an episode where...’ doesn’t deserve a ticking-off, particularly following such posts as ‘Use the try harder, Luke’. In fact, it can actually be useful to ground things when thinking abstractly, as long as you take care not to follow the fictional logic.
Hey Rick Astley! Much better than this decision theory crap.
Came across this at work yesterday, which isn’t unrelated. For every level of abstraction involved in a decision, or extra option added, I guess we should just accept that 50% of the population will fall by the wayside. Or start teaching decision theory in little school.
Happy Nondenominational Winter Holiday Period, all. Keep it rational.
I have included in the envelope a means of identifying myself when I claim the money, so that it cannot be claimed by someone impersonating me.
Doesn’t that technically make you now Known?
Also, how much time has to pass between an AI ‘coming to’ and the world ending? What constitutes an AI for this bet?
Eliezer, will you be donating the $10 to the Institute? If so, does this constitute using the wager to shift the odds in your favour, however slightly?
Yes, the last two are jokes. But the first two are serious.
Anonymous, that reminds me of some anecdote by Feynman where he has complex mathematical ideas described to him by young students. He wouldn’t fully understand them, but he would imagine a shape, and for each new concept he’d add an extra bit, like a squiggly tail or other appendage. When something didn’t fit in right, it would be instantly obvious to him, even if he couldn’t explain exactly why.
Improvised sensory modality for maths?
And note that Eliezer never answered your question, namely, if you can modify yourself so that you never get bored, do you care about or need to have fun?
Richard, probably you wouldn’t care or need to have fun. But why would you do that? Modifying yourself that way would just demonstrate that you value the means of fun more than the ends. Even if you could make that modification, would you?
How odd, I just finished reading The State of the Art yesterday. And even stranger, I thought ‘Theory of Fun’ while reading it. Also, nowhere near the first time that something I’ve been reading has come up here in a short timeframe. Need to spend less time on this blog!
Trying to anticipate the next few posts without reading:
Any Theory of Fun will have to focus on that elusive magical barrier that distinguishes what we do from what Orgasmium does. Why should it be that we place a different value on earning fun from simply mainlining it? The intuitive answer is that ‘fun’ is the synthesis of endeavour and payoff. Fun is what our brains do when we are rewarded for effort. The more economical and elegant the effort we put in for higher rewards, the better. It’s more fun to play Guitar Hero when you’re good at it, right?
But it can’t just be about ratio of effort to reward, since orgasmium has an infinite ratio in this sense. So we want to put in a quantity of elegant, efficient effort, and get back a requisite reward. Still lots of taboo-able terms in there, but I’ll think further on this.
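One way to see why a bare reward-to-effort ratio can’t be the whole story: as effort goes to zero the ratio diverges, so on that model orgasmium is infinitely fun. Any ‘fun’ function matching the synthesis-of-endeavour-and-payoff intuition should instead vanish when either ingredient is zero. A throwaway formalisation, nothing more; both functions here are invented for illustration:

```python
def ratio_fun(reward, effort):
    # Naive model: fun as reward per unit effort.
    # Blows up as effort -> 0, so orgasmium "wins" -- the wrong answer.
    return reward / effort

def synthesis_fun(reward, effort):
    # Alternative: fun requires *both* endeavour and payoff,
    # so it vanishes whenever either one is zero.
    total = reward + effort
    return reward * effort / total if total else 0.0

print(synthesis_fun(100.0, 0.0))  # orgasmium: huge payoff, no effort -> 0.0
print(synthesis_fun(10.0, 10.0))  # balanced endeavour and payoff -> 5.0
```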
Phil, what Vlad and Nick said. I’ve no doubt we won’t look much like this in 100 years, but it’s still humanity and its heritage shaping the future. Go extinct and you ain’t shaping nothing. This isn’t a magical boundary, it’s a pretty well-defined one.
‘Precise steering’ in your sense has never existed historically, yet we exist in a non-null state.
Aron, Robin, we’re only just entering the phase during which we can steer things to either a really bad or a really good place. Thinking only in the short term, even if you’re not confident in your predictions, is pretty irresponsible when you consider what our relative capabilities might be in 25, 50, 100 years.
There’s absolutely no guarantee that humanity won’t go the way of the neanderthal in the grand scheme of things. They probably ‘thought’ of themselves as doing just fine, extrapolating a nice stable future of hunting, gathering, procreating etc.
Marcello, have a go at writing a post for this site, I’d be really interested to read some of your extended thoughts on this sort of thing.
in a Big World, I don’t have to worry as much about creating diversity or giving possibilities a chance to exist, relative to how much I worry about average quality of life for sentients.
Can’t say fairer than that.
Eliezer, given the proportion of your selves that get run over every day, have you stopped crossing the road? Leaving the house?
Or do you just make sure that you improve the standard of living for everyone in your Hubble Sphere by a certain number of utilons and call it a good day on average?
design cycles have stayed about the same length while chips have gotten hundreds of times more complex, and also much faster, both of which soak up computing power.
So...if you use chip x to simulate its successor chip y, and chip y to simulate its successor, chip z, the complexity and speed progressions both scale at exactly the right ratio to keep simulation times roughly constant? Interesting stuff.
Sounds as though the introduction of black-box 2015 chips would lead to a small bump and level off quite quickly, short of a few huge insights, which Jed seems to suggest are quite rare. Eliezer, is this another veiled suggestion that hardware is not what we need to be working on if we’re looking to FOOM?
Changes to software that involve revising pervasive assumptions have always been difficult, of course.
Welcome to Overcoming Bias.
Also, while economists have many abstractions for modeling details of labor teams and labor markets, our standard is that the simplest versions should be of just a single aggregate quantity of labor.
Granted, but as long as we can assume that things like numbers of workers, hours worked and level of training won’t drop through the floor, then brain emulation or uploading should naturally send productivity through the roof, shouldn’t it?
Or is that just a wild abstraction with no corroborating features whatsoever?
because our computing hardware has run so far ahead of AI theory, we have incredibly fast computers we don’t know how to use for thinking; getting AI right could produce a huge, discontinuous jolt, as the speed of high-grade thought on this planet suddenly dropped into computer time.
Now there’s a scary thought.
Right, that’s it, I’m gonna start cooking up some nitroglycerin and book my Eurostar ticket tonight. Who’s with me?
I dread to think of the proportion of my selves that have already suffered horrible gravitational death.