In the Least Convenient Possible World of this hypothetical, every dust speck causes a constant small amount of harm, with no knock-on effects (no avoiding buses, no crashing cars...)
Quill_McGee
The system for generating new fields of research? After all, if it reasonably regularly generates areas that are no longer philosophy, then it actually creates value.
I assume you either linked to this in the post, or it has been mentioned in the comments, but I did not catch it in either location if it was present, so I’m linking to it anyway: http://intelligence.org/files/Non-Omniscience.pdf contains a not merely computable but tractable algorithm for assigning probabilities to a given set of first-order sentences.
Does it count if the state of trying lasted for a long (but now ended) time? Because if so, I kept on trying to create a bijection between the reals and the whole numbers until I was about 13, when I found an actual number that I could actually write down that none of my obvious ideas could reach, and found an equivalent for all the non-obvious ones. (0.21111111..., by the way)
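The general version of that failure is Cantor’s diagonal argument: given any claimed enumeration of the reals, you can construct a real the enumeration misses. A toy sketch (the enumeration and function names here are my own illustration, not anything from the comment):

```python
# Cantor's diagonal argument: given any claimed enumeration of reals
# in [0, 1) (represented as functions from index -> decimal digits),
# build a number that differs from the n-th real in its n-th digit.

def diagonal_counterexample(enumeration, digits=10):
    """enumeration(n) returns the digit sequence of the n-th real;
    return the first `digits` digits of a real the enumeration misses."""
    out = []
    for n in range(digits):
        d = enumeration(n)[n]
        # Pick any digit different from d (avoiding 9s sidesteps
        # 0.0999... = 0.1 representation issues).
        out.append(1 if d != 1 else 2)
    return out

# A toy "enumeration": the n-th real is the digit (n mod 10) repeated.
toy = lambda n: [n % 10] * 100

missing = diagonal_counterexample(toy)
# missing differs from toy(n) in its n-th digit for every n,
# so the real it approximates is not anywhere in the enumeration.
print(missing)  # [1, 2, 1, 1, 1, 1, 1, 1, 1, 1]
```

No matter what enumeration you pass in, the constructed number disagrees with the n-th entry at digit n, which is why every “obvious idea” for such a bijection has a counterexample.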
On the contrary, this is what the Litany of Tarski states.
I don’t think he was talking about self-PA, but rather an altered decision criterion, such that rather than “if I can prove this is good, do it” it is “if I can prove that if I am consistent then this is good, do it,” which I think doesn’t have this particular problem, though it does have others, and it still can’t /increase/ in proof strength.
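My paraphrase of the two criteria, written out, with T the agent’s theory, Con(T) its consistency statement, and Good(a) a stand-in predicate (this notation is mine, not from the original discussion):

```latex
% Original criterion: act on provable goodness.
\vdash_T \mathrm{Good}(a) \;\Longrightarrow\; \text{perform } a

% Altered criterion: act on goodness provable from consistency.
\vdash_T \bigl(\mathrm{Con}(T) \rightarrow \mathrm{Good}(a)\bigr)
  \;\Longrightarrow\; \text{perform } a
```

The altered criterion lets the agent use its own consistency as a hypothesis without asserting it, which is why it avoids the Löbian problem while still not gaining proof strength.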
After a bit of thought, I believe I’ve found a basically permanent solution for this. I use word replacer (not sure how to add links without just posting them; you can google it, it is in the chrome web store) with a bunch of rules to enforce ‘they’ as default. If you put rules for longer strings at the top, they match first (‘he is’ to ‘they are’ at the top with ‘he’ to ‘they’ lower down, for example).
You will have to put up with some number mismatch unless you want to add a rule for every verb in English (‘they puts’), but I feel that that is an acceptable sacrifice.
EDIT: Another issue: if you are actually talking about pronouns, you will have to temporarily disable it for things to make any sense whatsoever, and it doesn’t seem to have a way to disable it on a specific page, unlike the service I was using it to replace, so you have to use the extensions screen in settings.
EDIT 2: Also, and this is bothering me enough that I might actually stop using this, there is ‘her’ as a pronoun versus ‘her’ as a possessive, for example in ‘Get to know her’ versus ‘I found her wallet’. The first should be ‘Get to know them’ whereas the second should be ‘I found their wallet’, and I’m not sure what to do about that. If I find/build an extension which can interface with a list of English words with part-of-speech tagging, and have rules like ‘her’ (pronoun) -> ‘them’ and ‘her’ (possessive) -> ‘their’, then that’d work, but as is it is bugging me.
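The longest-match-first ordering can be sketched in a few lines. This is a hypothetical illustration, not the actual extension; it is lowercase-only for brevity and deliberately sidesteps the ‘her’ ambiguity above:

```python
import re

# Ordered pronoun-replacement rules: longer patterns first, so that
# "he is" becomes "they are" before a bare "he" -> "they" can fire.
RULES = [
    ("he is", "they are"),
    ("she is", "they are"),
    ("he", "they"),
    ("she", "they"),
    ("him", "them"),
    ("his", "their"),
]

def degender(text):
    for pattern, replacement in RULES:
        # \b keeps e.g. "the" or "herself" from matching a bare "he"/"her".
        text = re.sub(r"\b%s\b" % re.escape(pattern), replacement, text)
    return text

print(degender("he is sure his friend saw him"))
# -> "they are sure their friend saw them"
```

As noted above, verb-number mismatch still leaks through: ‘he runs’ comes out as ‘they runs’ unless you add a rule per verb.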
A way to communicate Exists(“N”) and not Exists(S) in a way that doesn’t depend on the context of the current conversation might be “”Santa” exists but Santa does not.” Of course, the existence of “Santa” is granted when “Santa does not exist” is understood by the other person, so this is really just a slightly less ambiguous way of saying “Santa does not exist.”
Darn it, and I counted like five times to make sure there really were 10 visible before I said anything. I didn’t realize that the stone the middle-top stone was on top of was one stone, not two.
http://www.fungible.com/respect/index.html This looks to be very related to the idea of “Observe someone’s actions. Assume they are trying to accomplish something. Work out what they are trying to accomplish,” which seems to be what you are talking about.
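That three-step idea can be made concrete with a minimal Bayesian sketch, under toy assumptions I am adding (a 1-D world, noisily goal-directed steps; all names here are hypothetical):

```python
# Minimal Bayesian goal inference on a 1-D world: an agent at integer
# positions takes steps of -1 or +1; each candidate goal predicts steps
# toward itself with probability p. We update a posterior over goals
# from the observed steps.

def infer_goal(start, steps, goals, p=0.9):
    posterior = {g: 1.0 / len(goals) for g in goals}  # uniform prior
    pos = start
    for step in steps:
        for g in goals:
            toward = 1 if g > pos else -1
            posterior[g] *= p if step == toward else (1 - p)
        pos += step
    total = sum(posterior.values())
    return {g: v / total for g, v in posterior.items()}

# Observe an agent start at 0 and move right three times:
post = infer_goal(0, [1, 1, 1], goals=[5, -5])
print(post)  # the goal at +5 is now far more probable than -5
```

This is exactly “assume they are trying to accomplish something, work out what”: the likelihood encodes the assumption of goal-directedness, and the posterior is the inferred goal.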
“[[ My favorite “other” referral was someone who checked the URL on tinychat entirely by coincidence, before it was passworded. ]]”
Yep, that was surprisingly successful. I also had success with that tactic on fimfiction.net, though that produced fewer useful results.
(also, unless there’s another 15-year-old, I look to be the youngest.)
I was thinking of the “feel bad and reconsider” meaning. That is, you don’t want regret to occur, so if you are systematically regretting your actions, it might be time to try something new. Now, perhaps you were acting optimally already and when you changed you got even /more/ regret, but in that case you just switch back.
In the Least Convenient Possible World of this hypothetical, each and every dust speck causes a small constant amount of harm, with no knock-on effects (no increasing one’s appreciation of the moments when one does not have dust in one’s eye, no preventing a ‘boring painless existence,’ nothing of the sort). Now it may be argued whether this would occur with actual dust, but that is not really the question at hand. Dust was just chosen as being a ‘seemingly trivial bad thing,’ and if you prefer some other trivial bad thing, just replace that in the problem and the question remains the same.
“if I can prove that if a version of me with unbounded computational resources is consistent then this is good, do it”
In this formalism we generally assume infinite resources anyway. And even if this is not the case, consistent/inconsistent doesn’t depend on resources, only on the axioms and rules for deduction. So this still doesn’t let you increase in proof strength, although again it should help avoid losing it.
I think that what Joshua was talking about by ‘infinite loop’ is ‘passing through the same state an infinite number of times.’ That is, a /loop/, rather than just a line with no endpoint. Although this would rule out (some arbitrary-size int type) x = 0; while(true){ x++; } on a machine with infinite memory, as it would never pass through the same state twice. So maybe I’m still misunderstanding.
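Under the ‘same state twice’ reading, looping is checkable within a budget; a minimal sketch (the function name and step budget are my own):

```python
# Loop-as-repeated-state: run a small state machine and report whether
# it ever revisits a state (a true cycle) within a step budget.

def revisits_state(step, state, max_steps=1000):
    seen = {state}
    for _ in range(max_steps):
        state = step(state)
        if state in seen:
            return True
        seen.add(state)
    return False  # no repeat observed (e.g. an ever-growing counter)

# while(true){ x++; } on unbounded memory never repeats a state:
print(revisits_state(lambda x: x + 1, 0))        # False within the budget
# ...but the same counter modulo 4 cycles through the same states forever:
print(revisits_state(lambda x: (x + 1) % 4, 0))  # True
```

The contrast between the two calls is exactly the distinction in the comment: the first program never terminates but is a line, not a loop; the second genuinely loops.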
There might be one more stone not visible?
It should be noted that if measured IQ is fat-tailed, this is because there is something wrong with IQ tests. IQ is defined to be normally distributed with a mean of 100 and a standard deviation of either 15 or 16, depending on which definition you’re using. So if measured IQ is fat-tailed, then the tests aren’t calibrated properly (of course, if your test goes all the way up to 160, it is almost inevitably miscalibrated, because there just aren’t enough people to calibrate it with).
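Under that definition the rarity of any given score is fixed by the normal distribution, which makes the calibration point concrete; a quick check with Python’s standard library (using the SD-15 convention, thresholds chosen for illustration):

```python
from statistics import NormalDist

# By definition, IQ ~ Normal(mean=100, sd=15) under the SD-15 convention.
iq = NormalDist(mu=100, sigma=15)

for score in (130, 145, 160):
    frac = 1 - iq.cdf(score)  # fraction of the population above `score`
    print(f"IQ > {score}: about 1 in {round(1 / frac):,}")
```

A score of 160 is 4 standard deviations out, roughly one person in thirty thousand, so a test normed on a few thousand subjects simply has no data with which to calibrate that region.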
Well, this comes up different ways under different interpretations. If there is a chance that I am being simulated (that is, this is part of his determining my choice), then I give him $100. If the coin is quantum (that is, there will exist other mes getting the money), I give him $100. If there is a chance that I will encounter similar situations again, I give him $100. If I were informed of the deal beforehand, I give him $100.

Given that I am not simulated, given that the coin is deterministic, and given that I will never again encounter Omega, I don’t think I give him $100. Seeing as I can treat this entirely in isolation due to these conditions, I have the choice between -$100 and $0, of which two options the second is better.

Now, this runs into some problems. If I were informed of it beforehand, I should have precommitted. Seeing as my choices given all information shouldn’t change, this presents difficulty. However, due to the uniqueness of this deal, there really does seem to be no benefit to any mes from giving him the money, and so it is purely a loss.
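The ex-ante arithmetic behind “I should have precommitted” can be made explicit. The $100 cost is from the scenario; the winning payoff is my assumption (the usual statement of this problem pays $10,000 on the other coin flip):

```python
# Expected value of precommitting, evaluated before the coin flip.
# The $100 loss is from the scenario; the winning payoff is an
# assumption (the standard counterfactual-mugging figure is $10,000).
PAYOFF_IF_HEADS = 10_000  # hypothetical
COST_IF_TAILS = 100

ev_precommit = 0.5 * PAYOFF_IF_HEADS - 0.5 * COST_IF_TAILS
ev_refuse = 0.0

print(ev_precommit)  # 4950.0 -- precommitting wins ex ante
# Yet after the coin has landed tails, the local choice is -$100 vs $0,
# which is exactly the tension described above.
```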
(aware that this is 2 years late, just decided to post) I find that I work, on average, somewhere between two and three times as fast when I am right up next to a deadline as when I have plenty of time.
Wasn’t Löb’s theorem ∀ A (Provable(Provable(A) → A) → Provable(A))? So you get Provable(⊥) directly, rather than passing through ⊥ first. This is good, as, of course, ⊥ is always false, even if it is provable.
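Spelling out the instantiation (my notation: □ for Provable, so Con(T) is ¬□⊥, i.e. □⊥ → ⊥):

```latex
% Löb's theorem, schematically:
\Box(\Box A \rightarrow A) \rightarrow \Box A

% Instantiating A := \bot, and using
% \mathrm{Con}(T) \equiv \neg\Box\bot \equiv (\Box\bot \rightarrow \bot):
\Box\,\mathrm{Con}(T) \rightarrow \Box\bot
```

So a theory that proves its own consistency proves ⊥, which recovers Gödel’s second incompleteness theorem, and the conclusion is □⊥ directly, as the comment says, rather than ⊥ itself.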