I am (being new here) working through the sequences. I do this because I immediately recognized, upon reading many of the responses here, that I lack the internal machinery, and certainly the language, to even foster discussion of what I do and believe, and why. The goal would be to get through all of the major and minor sequences, which seems like quite the low bar, except that I work a reasonable number of hours throughout the week, and already suffer a good deal of lethargy over not spending enough mental effort on reading new things. On that note, the major portion of this project is recreating the ability to focus on material for longer periods of time without feeling like my brain needs to shut off after using some of its energy for work activities. While my work is intellectually hard, it is of a tried and old sort for me that allows for lethargy without a decrease in quality, a feature I would expect of most jobs.
Here’s a question perhaps not posed too often. I’m new here, and finding the sheer amount of effort people seem to put into the status quo topics quite daunting. I recognize the objective value in many of the things discussed, in that by discussing them there is moral benefit. But day-to-day, I find myself envious rather than spurred to action that people are able to put forth the effort.
For far too long, I’ve been frustrated and in a mental lull, and I find little to attribute this to except a decrease in effort about the things I know I am capable of caring about. Being part of a community of like-minded individuals seems like the thing that would help me get out of this, but this in itself requires the sort of effort I only feel lucky to have currently as I make this attempt to interact with one.
Any advice for getting out of the rut?
Done, with apprehension. To be honest, the mildly altered state meditation stuff kind of weirds me out, which is hardly a predictor of its potential efficacy. To be more honest, my religious upbringing, to which I still often have a little allegiance (a discussion, and a long one, for another time), suggests an argument about not looking too inwardly for answers to your innermost hurts. But the real or imagined force of that argument is not the source of my apprehension, so I'd best ignore it.
I may wish to find a local therapist that is paid for by my health insurance. How does one walk into a general practitioner’s office and ask for a therapist? What sort of therapy am I looking for?
Hey everyone,
As I continue to work through the sequences, I’ve decided to go ahead and join the forums here. A lot of the rationality material isn’t conceptually new to me, although much of the language is very much so, and thus far I’ve found it to be exceptionally helpful to my thinking.
I’m a 24 year old video game developer, having worked on graphics for a particular big-name franchise for a couple years now. It’s quite the interesting job, and is definitely one of the realms in which I find the heady, abstract rationality tools extremely helpful. Rationality is what it is, and that seems to be acknowledged here, a fact I’m quite grateful for.
When I’m not discussing the down-to-earth topics here, people may find I have a sometimes anxiety-ridden attachment to certain religious ideas. Religious discussion has been extremely normal for me throughout my life, so while the discussion doesn’t make me uncomfortable, my inability to come to answers that I’m happy with does, and has caused me a bit of turmoil outside of discussion. Obviously there is much to say about this, and much people may like to say to me, but I’d like to first get through all the sequences, get all of my questions about it all answered, pay attention a bit to the discussions here, and I’ll go from there. I have no grand hopes to finally put these beliefs to rest, but I will go to lengths to see whether it is something I should do. To pick either seems to me to suppose I have a Way to rationality, if I understand the point correctly. I would invite any and all discussion on the topic, and I appreciate the little “welcome to Theists” in the main post here. :)
See you all around.
From what I can see, people probably thought you were belaboring a point which was not a part of the discussion at hand. You said you were answering the moral value of “there exists 3^^^3 people AND...” versus the situation without that prefix, but people discussing it did not take that interpretation of the problem, nor did Eliezer when he asked it. You might say that to determine the value of 3^^^3 people getting specks in their eye you would have to presuppose it included the value of them existing, but nobody was discussing that as if it were part of the problem. It sucks, yeah, but the way that people prefer to have discussions wins out, and you can but prefer it or not, or persuade in the right channels. A good lesson to learn, and don’t be discouraged.
Seattle, although I will of course be in San Francisco for GDC in late March.
I’m not yet sure what my goals are, although savings is a big one. If I knew I was going to be in the game industry, or even the software industry, forever, it might not be, but I’ve always wondered whether I might want the money to allow me to feel secure in later supporting myself if I chose something new that paid very little.
Admission: I haven’t read Harry Potter, but I’m told it’s not a major prerequisite.
Does induction state a fact about the territory or the map? Is it more akin to “The information processing influencing my sensory inputs is actually due to a processor in which P(0) & [P(0) & P(1) & … & P(n) → P(n+1)] for all propositions P and natural n?” Or is it “my own information processor is one for which P(0) & [P(0) & P(1) & … & P(n) → P(n+1)] for all propositions P and natural n?”
It seems like the second option is true by definition (by the authoring of the AI, we simply make it so because we suppose that is the way to author an AI to map territories). This supposition itself would be more like the first option.
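For reference, the schema I have in mind when I say “induction” (my own paraphrase of ordinary mathematical induction, so this may already be where I go wrong):

```latex
% Mathematical induction as a single schema over properties P of the naturals:
% if P holds at 0, and P is carried from each n to n+1, then P holds of every n.
\[
\Big( P(0) \;\wedge\; \forall n\,\big( P(n) \rightarrow P(n+1) \big) \Big) \;\rightarrow\; \forall n\, P(n)
\]
```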
I’m guessing I’m probably just confused here. Feel free to dissolve the question.
So I have. Mathematical induction is, so I see, actually a form of deductive reasoning because its conclusions necessarily follow from its premises.
How should I think about the terminologies “faith” and “axiom” in this context? Is this “faith in two things” more fundamental than belief in some or all mathematical axioms?
For example, if I understand correctly, mathematical induction is equivalent to the well-ordering principle (pertaining to subsets of the natural numbers, whose order type is quite a low ordinal). Does this mean that this axiom is subsumed by the second faith, which deals with the well-ordering of a single much higher ordinal?
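To be concrete, the well-ordering principle I mean is the following (again my paraphrase, in case this is part of my confusion):

```latex
% Well-ordering principle for the naturals: every nonempty subset has a least element.
\[
\forall S \subseteq \mathbb{N}\; \Big( S \neq \emptyset \;\rightarrow\; \exists m \in S\; \forall k \in S\; (m \leq k) \Big)
\]
% This is the statement usually said to be equivalent to the induction schema above.
```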
Or, as above, did Eliezer mean “well-founded?” In which case, is he taking well-ordering as an axiom to prove that his faiths are enough to believe all that is worth believing?
It may be better to just point me to resources to read up on here than to answer my questions. I suspect I may still be missing the mark.
I’m having a motivation block that I’m not sure how to get around. Basically whenever I think about performing an intellectual activity, I have a sudden negative reaction that I’m incapable of succeeding. This heavily lowers my expectation that doing these activities will pay off, most destructively so the intellectual activity of figuring out why I have these negative reactions.
In particular, I worry about my memory. I feel like it’s slipping from what it used to be, and I’m only 24. It’s like, if only I could keep the details of the memory tricks in my head long enough I might be able to improve it. :) Only partially kidding.
In short, it takes a lot of effort for me to feel like I’m succeeding at succeeding. And I don’t know why.
Many thanks. My memory issue certainly isn’t any sort of disorder, and indeed the sort of success I’d like to have with it are of a high level. There has been a decline in the last few years of my (formerly exceptional) abilities here, and I need to find ways to increase my attention to it as a graspable and controllable challenge/problem.
Generally my ability to deal with attention, focus, and memory issues correlates to my day-to-day mood and self-confidence. I’ve found a coach through the community here to help me find ways to combat these slightly more fundamental issues. It is good, though, to see the wide variety of talk here about improving focus, overcoming “Ugh fields,” and the like.
Fundamentally, my issue is one of keeping a particular skill in practice, and so I appreciate your practical suggestions. University offers an environment that more constantly practices skills such as learning, remembering, and new-paradigm thinking. My work environment offered similar challenges for a year or so, but I’ve since gained an expertise that is more valuable to use than to grow.
Today I gave a presentation to a group of 50 software developers in my company, and I was pleasantly surprised at my abilities. Apparently all of my on-the-fly speaking skills (which I had presumed dead since school) were just latent, if out of practice, until the adrenaline kicked them back online. This was in no small part due, I suppose, to some mental tricks I’ve learned here for convincing myself of my future success, based on previous successes.
Just typing for my own benefit now. Thank you very much for your advice!
Here is yet another question to help me reveal my misunderstandings:
So, according to decoherence, a human believing that a quantum event with probability 50% occurs is equivalent to a “version” of a human brain becoming coupled with the amplitude blob corresponding to that 50%. The seeming complication that yet another “version” of a human brain is coupled with the other 50% of the amplitude distribution is all in our heads; the quantum physics giving rise to this “complication” is quite simple.
How about this experiment then? I set up an event that I know has two blobs A and B, each corresponding to 50% probabilities. I also set up, on the side, a two-slit experiment. I agree to myself beforehand that no matter the outcome of the event, I will cover one of the slits in my side experiment. As expected, no interference pattern occurs on the film.
Next, I do a similar experiment. This time, I only cover a slit on outcome A. If I find myself the version that observes outcome A, will I find 50% of an interference pattern caused by the amplitude distribution in the version of the world caused by outcome B, and importantly, by the version of myself in outcome B that fails to cover the slit?
If there is something wrong with this setup, might there not be another similar way to prove that other worlds exist?
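To make the question concrete, here is a toy numerical sketch of the candidate patterns I could imagine the version of me in branch A seeing. The Gaussian slit amplitudes and every number in it are my own made-up simplification, not a real calculation:

```python
import numpy as np

# Toy one-dimensional double-slit model: each open slit contributes a Gaussian
# amplitude with a position-dependent phase. All parameters are invented for
# illustration; nothing here is calibrated to a real experiment.
x = np.linspace(-10, 10, 2001)  # positions on the film

def slit_amplitude(center, k=3.0, width=4.0):
    """Complex amplitude at the film from a single slit located at `center`."""
    return np.exp(-((x - center) ** 2) / (2 * width ** 2)) * np.exp(1j * k * center * x)

one_slit = slit_amplitude(+1.0)                          # branch A: I covered one slit
two_slit = slit_amplitude(+1.0) + slit_amplitude(-1.0)   # branch B: both slits open

# Candidate patterns for what the version of me in branch A observes:
p_single     = np.abs(one_slit) ** 2                 # plain single-slit pattern
p_interfere  = np.abs(two_slit) ** 2                 # full interference pattern
p_half_mixed = 0.5 * p_single + 0.5 * p_interfere    # "50% of an interference pattern"

# The question is whether branch B's open slit shows up on my film at all,
# i.e. whether I see p_single or something like p_half_mixed.
for name, p in [("single", p_single), ("interference", p_interfere), ("mixed", p_half_mixed)]:
    print(name, float(p.max()))
```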
So I’m running through the Quantum Mechanics sequence, and am about 2⁄3 of the way through. Wanted to check in here to ask a few questions, and see if there aren’t some hidden gotchas from people knowledgeable about the subject who have also read the sequence.
My biggest hangup so far has been understanding when it is that different quantum configurations sum, versus when they don’t. All of the experiments from the earlier posts (such as those in “Distinct Configurations”) seem to indicate that configurations sum when they are in the “same” time and place. Eliezer indicates at some point that this is “smeared” in some sense, perhaps due to the fact that all particles are smeared in space and time; therefore if two “particles” in different worlds don’t arrive at the same place at exactly the same time, the smearing will cause the tail ends of their amplitude distributions to still interact, resulting in a less perfect collision, with results partway toward what would have happened in the perfect experiment.
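Here is the toy picture I have of that smearing (entirely my own construction, so it may well misrepresent the sequence): the interference term in the summed amplitude only matters to the extent that the two blobs overlap.

```python
import numpy as np

# Two Gaussian amplitude blobs separated by a distance d. The total probability
# density is |psi1 + psi2|^2 = |psi1|^2 + |psi2|^2 + 2*Re(conj(psi1)*psi2).
# The last (interference) term shrinks as the blobs stop overlapping -- my toy
# picture of "configurations only sum when they end up in the same place."
x = np.linspace(-20, 20, 4001)
dx = x[1] - x[0]

def blob(center, width=1.0):
    psi = np.exp(-((x - center) ** 2) / (2 * width ** 2)).astype(complex)
    return psi / np.sqrt(np.sum(np.abs(psi) ** 2) * dx)   # normalize

for d in [0.0, 1.0, 3.0, 10.0]:
    psi1, psi2 = blob(-d / 2), blob(+d / 2)
    cross = 2 * np.real(np.conj(psi1) * psi2)
    # Size of the interference term relative to the non-interfering part:
    ratio = np.sum(np.abs(cross)) / np.sum(np.abs(psi1) ** 2 + np.abs(psi2) ** 2)
    print(f"separation {d:4.1f}: interference/total ~ {ratio:.3f}")
```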
The hangup becomes an issue, barring any of my own misunderstanding (which is of course likely), when he starts talking about macroscopic other worlds. He goes so far as to say that when a quantum event is “observed,” what really happens is that different versions of the experimenter become decohered with the various potential states of the particle.
Several things don’t seem quite right here. First, Eliezer seems to imply here that brains only work (to the extent that they can have beliefs capable of being acted on) when they work digitally, with at least some neurons having definite on or off states. What happens to the conservation of probability volume due to Liouville’s Theorem described in Classical Configuration Spaces? Or maybe I misunderstand here, and the probability volumes actually do become sharply concentrated in two positions. But then why is it not possible for probability volumes to become usually or always sharply concentrated in one position, giving us, for all practical purposes, a single world?
Backing up a bit though. What keeps different worlds from interacting? Eliezer implies in Decoherence that one important reason that decohered particles are such is a separation in space. What I fail to understand, if there is not some specified other axis, is why the claim stands that different but similar worlds (different only along that axis) fail to interact! According to his interpretation (or my interpretation of his interpretation) of quantum entanglement, your observation of a polarized particle at one end of a light-year limits the versions of your friend (who observed the entangled particle) that you are capable of meeting when you compare notes in the middle. But why do you not just as easily meet any other version of your friend? What is the invisible axis besides space and time that decoheres worlds, if we meet at the same place and time no matter what we observe?
More importantly, what keeps neurons which are at the same space and time from interacting with their other-world counterparts, as if they were as real as their this-world self?
Unless I’m completely off here, couldn’t there be many fewer possible worlds than Eliezer suggests? In extremely controlled experiments, we observe decoherence on rather macroscopic levels, but isn’t “controlled” precisely the point? In most normal quantum interactions, isn’t there always going to be interference between worlds? And what if that interference by the nature of the fundamental laws just so happens to have some property (maybe a sort of race condition) that causes, usually, microscopic other worlds to merge? On average, if possible worlds become macroscopic enough, still-real interactions between the worlds become increasingly likely, and they are no longer “other worlds” but actually-interacting same-world, to the point where no two differently configured sets of neurons could ever observe differently.
I should stop here before I carry on any early-introduced fallacy to increasingly absurd conclusions. Would be very interested in how to resolve my confusion here.
They certainly don’t work only digitally, but the suggestion seems to be that for most brain states at the level of “belief” it is required that at least some neurons have definite states, if only in the sense of “neuron A is firing at some definite analog value.”
Interestingly, the most influential prior, the one that made me change my prediction from the incorrect to the correct answer, was the existence of this post. Had I stumbled across the video without expectations, I would have put little effort into fixing my incorrect intuition of “flies off in every direction.” But reading the post, particularly the comment that I HAVE to watch the video, suggested that the actual result was interesting and perhaps in some way nonintuitive.
What’s interesting about the Fermi Estimate post is that its examples encourage you to look for predictors that are unexpectedly reliable, rather than those that first jump to mind.
As for the fact that I haven’t heard of many plane crashes in the past decade: this sounds like something I might hear in a post arguing the opposite point. “Sure, you haven’t heard of plane crashes in a decade, but why suspect that reliable predictors are in the neighborhood of your daily activities rather than your knowledge about the world? And now I will eye you knowingly until you learn something of your cognitive biases!”
Although, perhaps there aren’t any estimated-statistics approaches that would be any good here, that wouldn’t rely on other more incidental bits of information you happen to possess. Sure, you could try to list the top causes of flight disasters (human error and mechanical malfunction?) and estimate the likelihood these things occur, and also estimate how many of these flights result in large-scale deaths. But there may be too many variables; either that, or I have a ways to go in making Fermi estimates. In any case, it would be hard to incorporate the time variation of flight disaster outcomes. For all I know, flight safety has skyrocketed in the past decade due to widespread process improvement resulting from studying past disasters. And how could I ever predict that effectiveness, or predict how long it would take to come about?
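For what it’s worth, here is the shape of the estimate I was gesturing at; every input below is a number I made up on the spot rather than anything I looked up:

```python
# A crude Fermi estimate of large-scale airline disasters per decade.
# Every number below is a guess for illustration, not a looked-up statistic.
flights_per_day = 50_000            # guess: worldwide commercial flights per day
p_disaster_per_flight = 1e-7        # guess: chance a given flight ends in a crash
p_large_scale = 0.5                 # guess: fraction of crashes with many deaths

flights_per_decade = flights_per_day * 365 * 10
expected_disasters = flights_per_decade * p_disaster_per_flight * p_large_scale
print(f"~{expected_disasters:.0f} large-scale disasters per decade (given my guesses)")
# The answer is hostage to p_disaster_per_flight, which is exactly the quantity
# my personal experience gives me no handle on, and which presumably changes over time.
```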
It is thus claimed either that we could not know that a prospective machine was a hypermachine after witnessing finitely many computations or that it simply would not be a hypermachine since its behaviour could be simulated by a Turing machine. Hypercomputation is thus claimed to be on shaky ground.
The former suggestion seems like the more important point here. While it’s true that the hypercomputer’s behavior can be simulated on a Turing machine, this is only true of the single answer given and received, not of the abstractly defined problem being solved. The hypercomputer still cannot be proven by a Turing machine to have knowledge of a fact that a Turing machine could not itself have.
And so the words “shaky ground” are used loosely here. The argument doesn’t refute the theoretic “existence” of recursive computation any more than it refutes the existence of theoretic hypercomputation. That finite state machines are the only realizable form of Turing machine is hardly a point in their generalized disfavor.
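A toy illustration of the former point (my own, with made-up names): any finite transcript of a would-be halting oracle’s answers can be reproduced by an ordinary program that just memorizes them, so no finite set of observations can certify that the box in front of you computes anything a Turing machine couldn’t.

```python
# Hypothetical transcript of finitely many queries to a claimed halting oracle.
# The (program, input) keys and True/False answers here are invented placeholders.
observed_answers = {
    ("prog_1", "input_a"): True,
    ("prog_2", "input_b"): False,
    ("prog_3", "input_c"): True,
}

def lookup_table_machine(program, program_input):
    """An ordinary (Turing-computable) function that reproduces the transcript."""
    return observed_answers[(program, program_input)]

# For every query actually witnessed, the lookup table agrees with the "oracle",
# so the finite evidence cannot distinguish the two, even though the abstract
# halting problem itself is not Turing-computable.
for query, answer in observed_answers.items():
    assert lookup_table_machine(*query) == answer
print("lookup table matches every observed oracle answer")
```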
I find I have heavily skewed ratios for the various activities in my life, which I could explain away as me just being specialized, although they could still use some work. In general, I have higher production ratios for tasks where I care about the value of what is produced, or for tasks in which I am currently skilled.
To choose two of the heading items, I cook far less often than I code. Becoming good at cooking (quality of production) is not something I care to focus on, so I choose a PCR that increases only the quality of my consumption, which is realized by spending more money on easier foods, and freeing more time for other things (like coding). Alternately, I choose a coding PCR along the lines that you describe, which is to optimize the quality of my production by learning from others without losing time to learn from my own experience.
I would do well to shuffle these ratios a bit. I spend too little time reading code (and coding articles) because I overestimate my abilities relative to others (and so perhaps the value of time spent coding). And I spend too little time buying strange ingredients and reading allrecipes.com because I underestimate my abilities relative to others (and so perhaps the value of time spent cooking). I overemphasize the value of my current skills over the value of what other skills could be with effort.
Great post, this way of thinking about it is very revealing.
I’m currently working for a large game company, and while I have no plans of leaving, I feel as if at some point I would like to branch off and work on a more self-motivated project.
The cost analysis of this isn’t easy for me. My position in the game industry is somewhat atypical; I joined one of the most promising teams in the world directly out of college, and have since done good work on our recently released project. From within the studio, advice on where to direct my career seems pretty risk averse. We have high-paying, extremely stable jobs which aren’t guaranteed to be reclaimable if we exit, and with rewards becoming more lucrative the longer we stay. Two questions for you then:
Why did you recently quit your job? Is your main motivation along the lines of expectations of future earnings, satisfaction in the intellectual worth of the projects you choose to work on, or satisfaction in the worth of your product to others?
Where would you have ranked yourself in terms of your success in the large-scale game industry? I have a hard time comparing my situation to others’, because as mentioned I feel as though my case is atypical and thus has potential for higher payoffs if I choose rightly with my future. Having worked only a year and a half in the industry, I will be giving a talk at GDC this year about work done on our last AAA console title, which seems on the surface like something to make that case, although I recognize the major risk of self-aggrandizement here.
What city do you live in, if I may ask?