I can highly recommend everything I have read by Niven. Many of his works are really well-done, fairly “hard” sci-fi, particularly the Ringworld series (the titular object is related to Dyson spheres, and has been called a “Niven ring” in his honor). I just finished Destiny’s Road, and I couldn’t put it down. The Mote in God’s Eye is an amazing collaboration with Pournelle, and a classic to boot. The last is the only one I saw mentioned elsewhere, but if you enjoy any of these, you’ll likely enjoy the rest too.
I’m currently writing a program (in C) for my continuum mechanics class to simulate crowd physics (just in 2D) using nearest-neighbor potentials. Once I get it running, I’ll simulate a “Black Friday” type event with a linear attractive potential and various barriers, and then see if I can first produce and then avoid crushing “deaths” (a rough sketch of the core step is below). I’m also in the process of trying to be more social, actually actively trying to make friends and interact with my peers instead of holing up in my room all day. Thus far I’ve noticed a distinct increase in my overall happiness as a result, and my academic performance unexpectedly hasn’t even wavered.
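Here’s a minimal sketch of the kind of update step I have in mind. Everything concrete in it (the constants, the soft linear repulsion, the goal position standing in for the store doors) is a placeholder assumption rather than my final design:

```c
#include <math.h>
#include <stdio.h>

#define N      100    /* number of agents */
#define DT     0.01   /* time step */
#define K_GOAL 1.0    /* slope of the linear attractive potential */
#define K_REP  5.0    /* strength of the neighbor repulsion */
#define R_CUT  0.5    /* interaction cutoff radius */
#define DRAG   0.98   /* crude velocity damping so agents settle */

typedef struct { double x, y, vx, vy; } Agent;

/* One explicit-Euler step: every agent feels a constant-magnitude pull
 * toward the goal at (gx, gy) (a linear potential U = K_GOAL*r gives a
 * constant force), plus a soft short-range repulsion from any neighbor
 * closer than R_CUT -- the crowding term that produces "crushing". */
void step(Agent a[N], double gx, double gy)
{
    double fx[N] = {0}, fy[N] = {0};

    for (int i = 0; i < N; i++) {
        double dx = gx - a[i].x, dy = gy - a[i].y;
        double r  = sqrt(dx * dx + dy * dy) + 1e-12;
        fx[i] += K_GOAL * dx / r;
        fy[i] += K_GOAL * dy / r;
    }

    for (int i = 0; i < N; i++)
        for (int j = i + 1; j < N; j++) {
            double dx = a[i].x - a[j].x, dy = a[i].y - a[j].y;
            double r  = sqrt(dx * dx + dy * dy);
            if (r < R_CUT && r > 1e-12) {
                double f = K_REP * (R_CUT - r) / r; /* grows as agents overlap */
                fx[i] += f * dx;  fy[i] += f * dy;
                fx[j] -= f * dx;  fy[j] -= f * dy;
            }
        }

    for (int i = 0; i < N; i++) {
        a[i].vx = DRAG * a[i].vx + fx[i] * DT;
        a[i].vy = DRAG * a[i].vy + fy[i] * DT;
        a[i].x += a[i].vx * DT;
        a[i].y += a[i].vy * DT;
    }
}

int main(void)
{
    Agent crowd[N];
    for (int i = 0; i < N; i++)                /* start on a 10x10 grid */
        crowd[i] = (Agent){ i % 10, i / 10, 0.0, 0.0 };
    for (int t = 0; t < 1000; t++)
        step(crowd, 20.0, 4.5);                /* everyone rushes the "doors" */
    printf("agent 0 ended at (%.2f, %.2f)\n", crowd[0].x, crowd[0].y);
    return 0;
}
```

A crushing “death” could then be flagged whenever the net repulsive force on an agent exceeds some threshold, and barriers added as fixed agents that push but never move.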
Were you planning on running the game in person, or would there be a chance of doing it remotely? I’ve only had a little experience with role-playing games, but I enjoyed it quite a lot.
Well, of course we would! Executing an action based on the truth of a hypothesis while trying to determine whether it’s true or not would be somewhat odd.
Another option:
- it’s morally acceptable to terminate a conscious program if it wants to be terminated
- it’s morally questionable (wrong, but to a lesser degree) to terminate a conscious program against its will if it is also possible to resume execution
- it is horribly wrong to turn off a conscious program against its will if it cannot be resumed (murder fits this description currently)
- performing other operations on the program that it desires would likely be morally acceptable, unless the changes are socially unacceptable
- performing other operations on the program against its will is morally unacceptable to a variable degree (brainwashing fits in this category)
These seem rather intuitive to me, and for the most part I just extrapolated from what it is moral to do to a human. “Conscious program” here refers to one running on any substrate, including wetware, so these apply to humans as well. I should note that I am in favor of euthanasia in many cases, in case that part causes confusion.
I’m taking note of the latter and adding it to my list of “books to read when I have time and motivation for independent education.” Applying Scheme to mechanics does seem quite useful, from what I can tell after a cursory look, and there’s a nice bit more mathematical rigor than I had the luxury of in my physics classes. Overall, it looks like this text takes an approach that I’ll like a lot, once I get to it.
For the record, I’m a physics and mathematics undergrad, graduating next May. My school’s physics program recently decided to actually start making us apply that programming they had us learn; I might consider trying Scheme instead of C if I feel like it.
I suppose the correct value is probably around 3000 m.
I know of no large mountains in Sweden, so I’m guessing what seems to be a reasonably low number.
I’m a college student too, and just about finished with my application. The form really does make it seem like it’s targeted at people who have already received at least one degree, but I wouldn’t be surprised if some promising undergrads made it in.
Well, I have encountered people being (or claiming to be) offended by what by all rights would be an assault on someone else’s status. This could be a form of empathy, or in many cases an attempt to gain status themselves through a show of sympathy. This does seem like a potential occurrence of legitimate offense not caused by a perceived direct or indirect threat to the status of the person being offended, iff the offense is genuine, something which I cannot personally attest to, never having experienced this myself.
I think one of the things that makes learning hard, given this interpretation, would be difficulty in actually updating the model. It may be that large amounts of surprise, corresponding to large changes in the model produced by updating, make it hard to update, and this is certainly one level of hardness felt when learning. But additionally, there is likely to be some variance in general ability to update certain models: people with limited kinesthetic senses would not only be operating with less data to update on, but may also have a more rigid model.
Model rigidity seems to me like a good candidate for the variance between students’ subjective experience of the hardness of learning certain things. It also seems like it would be strongly correlated with the appropriate types of intelligence: kinesthetic intelligence relates to a more easily changed model of physical syntax, procedural intelligence relates to a more easily changed model of procedural syntax, &c.
This also corresponds well to my own personal experience of what is hard and easy to learn: my understanding of how the different elements of a problem can interact changes at a speed proportional to how easy the subject seems. E.g., I can change my understanding of how abstract quantities/qualities interact fairly quickly, making math easy to learn, while my understanding of systems of social interaction changes very slowly (due in part to the difficulty of collecting evidence), and thus I was socially awkward for a long time, and it took a lot of effort to overcome.
I saw it more as opposing restrictions on one’s ability to hit oneself in the head with a baseball bat every week. I’m not saying anyone should do it, but if they really want to I don’t feel I have the right to stop them.
> The odds of winning the lottery are ordinarily a billion to one. But now the branch in which you win has your “measure”, your “amount of experience”, temporarily multiplied by a trillion. So with the brief expenditure of a little extra computing power, you can subjectively win the lottery—be reasonably sure that when next you open your eyes, you will see a computer screen flashing “You won!”
As I see it, the odds of being any one of those trillion “me”s in 5 seconds are 10^21 to one (one trillion times one billion). Since there are a trillion ways for me to be one of those, the total probability of experiencing winning is still a billion to one. To be more formal:
P(“experiencing winning”) = sum over n of P(“winning” | “being me #n”) × P(“being me #n”) = sum over n of P(“winning” and “being me #n”) = 10^12 × 10^-21 = 10^-9, since “being me #n” partitions the space.
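The same computation in more standard notation, with $W$ standing for “experiencing winning” and $M_n$ for “being me #n” (the symbols are just my shorthand):

$$P(W) = \sum_n P(W \mid M_n)\,P(M_n) = \sum_n P(W \cap M_n) = 10^{12} \times 10^{-21} = 10^{-9}$$

Only the $10^{12}$ winning continuations contribute to the sum, each with probability $10^{-21}$; for the rest, $P(W \mid M_n) = 0$.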
Overall this means I:
- anticipate not winning at 5 sec.
- anticipate not winning at 15 sec.
- don’t have super-psychic-anthropic powers
- don’t see why anyone has an issue with this
Checking consistency just in case:
p(“experience win after 15s”) = p(“experience win after 15s” | “experience win after 5s”) × p(“experience win after 5s”) + p(“experience win after 15s” | “experience not-win after 5s”) × p(“experience not-win after 5s”)
p(“experience win after 15s”) = (~1) × (10^-9) + (~0) × (1 − 10^-9) = ~10^-9 = ~p(“experience win after 5s”)
Additionally, I should note that the total number of “people who are me who experience winning” will be 1 trillion at 5 sec. and exactly 1 at 15 sec. This is because those trillion “me”s must all have identical experiences for merging to work, meaning the merged copy has only one set of consistent memories of having won the lottery. I don’t see this as a problem, honestly.
Posting this before reading the comments, to give a summary/response based on my own internal experiences. Quick note: I’m extremely good at internalizing/manipulating information, and roughly proficient at “reacting”. It might also be worth noting sex (I’m male), since I could definitely see these kinds of thought processes being different on the two standard systems.
This analysis is definitely subject to the “generalizing from one example” problem, considering some large differences between the thought mechanisms you mention and my own. One telling example is the programming/reacting analogy: when programming (and writing, after the first stage of composition) I have a tendency to “hold the whole program in my head”, as I’ve heard it called, and in doing so I don’t use an internal monologue at all. In fact, when I’m solving most problems (math, spatial manipulations, logic puzzles) in my mind, my internal monologue is silent, and instead I’m working silently in my headspace: my reasoning methods feel spatial, rather than verbal. When working in a group (cooking is the closest example of “reacting” that I can relate to in terms of the efficiency/urgency required), the monologue is still silent and I’m solving problems through pseudospatial manipulation; the significantly smaller amount of problem solving necessary does tend to let the problem/solution just sit static in my head most of the time while I engage in physical tasks, though, rather than being actively solved. This, for me, leads to a sense that very little focus is used while reacting; some tasks (mincing garlic, dicing onions (crying makes it harder), &c.) may require close attention if physically complicated, however, and this might be the other kind of focus you mention. Overall, I can add another confirming data point to the “silencing your internal monologue is helpful/necessary for reacting properly” hypothesis.
I also have some possible suggestions, though mileage will likely vary extremely:
- silencing one’s internal monologue can be aided by meditation (in fact, they are practically equivalent), so the initial meditation exercises to “clear one’s mind” may prove useful in getting used to doing this, and possibly make it easier.
- there’s no need to practice silencing your internal monologue only while “reacting”: try doing it during everyday tasks where intense thought isn’t necessary (e.g. brushing your teeth), and it might become that much easier.
- if your brain works like mine, you may be able to delegate certain tasks to parts of your mind not directly linked to what you consider “you” (one notably common example is realizing the solution to a problem you were working on a while ago but not actively thinking about), and if you can get good at this, it works better (for me) than memorizing responses: just let yourself respond on automatic.
From what I saw, it seems they figured out that that was their best bet (somehow) fairly quickly. Once Watson lost control, the other two lost very little time in going for the big points.
This is even easier to game: assuming the school has any merit, any individual you ask has a strong incentive simply to say “50%”, guaranteeing a perfect score. The very first time you used the test it might be okay, but only if nobody knew that the school’s reputation was at stake.
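To make the exploit concrete, here’s a toy sketch. I’m assuming, purely for illustration, that the test grades calibration as the gap between stated confidence and observed accuracy; on two-option questions, guessing blindly while always claiming 50% drives that gap to zero no matter what anyone actually knows:

```c
/* Toy calibration scorer -- the scoring rule (stated confidence vs.
 * observed accuracy) is my own assumption, not the actual test. */
#include <stdio.h>
#include <stdlib.h>

#define QUESTIONS 10000

int main(void)
{
    srand(42);
    const double stated = 0.5;            /* always claim 50% confidence */
    int correct = 0;
    for (int i = 0; i < QUESTIONS; i++) {
        int answer = rand() % 2;          /* guess blindly between two options */
        int truth  = rand() % 2;
        if (answer == truth) correct++;
    }
    double accuracy = (double)correct / QUESTIONS;
    /* a perfectly calibrated answerer has stated == observed */
    printf("stated %.2f, observed %.4f, calibration error %.4f\n",
           stated, accuracy, accuracy - stated);
    return 0;
}
```

The score comes out perfect without the school’s teaching contributing anything, which is exactly the problem.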
They surprised me too. (I actually felt the urge to use an unnecessary exclamation point there, the priming’s made me so enthusiastic...)
And I think that the status gained from the fact that you noticed being primed probably outweighed any lost due to us being told it happened. Though now that we’re noticing it, we need to decide which frequency of upvoting we should be using so we can avoid the effect.
I approached it similarly (as part of a more general attempt, since this is a minor use of the word), positing that “I could lift that box over there” is a comparison between the physical prowess necessary to complete the task and the amount I currently possess. In Eliezer’s formulation, this is equivalent to determining reachability with constraints, but it’s more an example of the general procedure than an explanation of it, unfortunately. I’m glad to see that someone else was thinking similarly, though.
Even including Harry Potter and his sudden ability to move particular objects discontinuously 100 years into the future by snapping his fingers, my claim stands. The point concerns the instantaneous movement of every part of the universe to its future position, in which case inhabitants of the universe will see the signal (fingers snapping) and see nothing out of the ordinary happen. These observers will even continue to observe what happens throughout the next 100 years, or at least it will be indicated as such with complete consistency in any and all records present at the end of those 100 years, including the memories of every living being. The only difference when including Harry in the picture is that our fundamental description of the physical laws changes; when the whole universe is moved, not a single one of the laws’ consequences is distinguishable from time progressing normally, so the two are still equivalent statements. By introducing unphysical Harry, we develop a way to distinguish the two explanations, but this is irrelevant to our reality.
> if someone snapped their fingers and instantly moved all objects to their 100 years hence positions, it would not be the future
I beg to differ. Everybody would remember the motion having taken place; the history of that 100 years would be recorded. There is no way in principle to experimentally distinguish this occurrence from the normal progression of time by 100 years, so I claim they are the same.
Well, when you do, I’d definitely like to play :)