You can do it in a very calculation-intensive manner: take all programs of fewer than K bits (with K sufficiently big), calculate their answers (to avoid the halting problem, wait for each answer only a finite but truly enormous number of steps, for example 3^^^3 steps), and compare them to the answers given by the program predictor. Of course you can’t do that in any reasonable amount of time, which is why you’re using the “good enough to be improved on” program predictor to predict the result of the calculation.
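As a toy illustration of that brute-force check (everything here is invented for the sketch — the one-counter bitstring semantics, the step budget standing in for 3^^^3 — a real version would enumerate programs in some fixed universal language):

```python
from itertools import product

def run_toy_program(bits, step_limit):
    """Interpret a bitstring as a toy program acting on one counter:
    1 = increment, 0 = jump back to the start if the counter is odd.
    Returns the counter if the program runs off the end, or None if
    the step budget runs out (our stand-in for 'non-halting')."""
    counter, pc, steps = 0, 0, 0
    while pc < len(bits):
        if steps >= step_limit:
            return None  # treated as non-halting
        steps += 1
        if bits[pc] == 1:
            counter += 1
            pc += 1
        else:
            pc = 0 if counter % 2 == 1 else pc + 1
    return counter

def check_predictor(predictor, max_bits, step_limit):
    """Compare a predictor's answers against brute-force execution
    of every program of fewer than `max_bits` bits; count mismatches."""
    errors = 0
    for n in range(max_bits):
        for bits in product((0, 1), repeat=n):
            if predictor(bits) != run_toy_program(bits, step_limit):
                errors += 1
    return errors
```

A predictor that actually runs every program scores zero errors here, but only because the toy step budget is affordable; the whole point of the real setup is that it isn’t.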
Karl
In fact, I’m “asking” the program predictor to find the program which generates the best program predictor. It should be noted that the program predictor does not necessarily “consider” itself perfect: if you ask it to predict how many of the programs of fewer than M bits it will predict correctly, it won’t necessarily say “all of them” (in fact it shouldn’t say that if it’s good enough to be improved on).
I’m not sure what you’re accusing me of making harder than it needs to be.
Could you clarify?
To the contrary, the danger arises because the AI will interact with us in the interim between input and output with requests for clarification, resources, and assistance. That is, it will realize that manipulation of the outside world is a permitted method in achieving its mission.
Except this is not the case for the AI I describe in my post.
The AI I describe in my post cannot make requests for anything. It doesn’t need clarification because we don’t ask it questions in natural language at all! So I don’t think your criticism applies to this specific model.
Congratulations on raising the expected utility of the future!
Here in Spain, France, and the UK, the majority of people are atheists.
I would be interested in knowing where you got your numbers, because the statistics I found definitely disagreed with this.
Color me interested.
I think the character creation rules really should be collected together. As is, I can’t figure out how you’re supposed to determine many of your initial statistics (wealth level, hit points, speed...). Also, I don’t like the fact that the number of points you have to distribute among the big five and among the skills is random. And of course a system where you simply divide X points among your stats however you wish is completely broken. You should really think about introducing some limit on the number of points you can put into any one stat, and some sort of diminishing returns.
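For instance, a quadratic point-buy cost is one standard way to get diminishing returns — this is just an illustrative suggestion on my part, not something from the existing rules:

```python
def stat_cost(level):
    """One possible diminishing-returns scheme: raising a stat to
    `level` costs 1 + 2 + ... + level points, so each further point
    of stat costs more than the last."""
    return level * (level + 1) // 2
```

Under this scheme a stat of 3 costs 6 points while a stat of 5 costs 15, so maxing one stat is much more expensive than spreading points around.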
But even if many parts of the design are open to criticism, what you’ve created is still very awesome.
ETA: There are a lot of references in the rules to a Faith stat, except that there is no such stat! Also, the righteousness stat is often called morality in the rules.
So your atheists are actually nay-theists… If that’s the case, I have difficulty imagining how a group containing both atheists and theists could work at all...
What if that square is now occupied?
I don’t think the part about summoning Death is a reference to anything. After all, we already know what the incarnations of Death are in MOR. And it looks like the counterspell to dismiss Death is lost no more, thanks to Harry...
The rationals are dense in the reals, so there is always a rational number between any two distinct real numbers. So for example, if it can be proven in your formal system that A()=1 ⇒ U()=π, and this happens to be the maximal utility attainable, then there is a rational number x greater than the utility achievable by any other action and such that x≤π. You will therefore be able to prove that A()=1 ⇒ U()≥x, and so the first action will end up being taken, because by definition x is greater than the utility that can be obtained by taking any other action.
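The density claim is easy to make concrete. Here is a minimal sketch of why such an x always exists, with rational endpoints standing in for the reals (since in code we can only compare against finite approximations anyway):

```python
from fractions import Fraction

def rational_between(lo, hi):
    """Find a rational strictly inside the open interval (lo, hi),
    lo < hi. Halve a dyadic step size until some multiple of it
    lands strictly between the endpoints; this terminates as soon
    as the step is smaller than hi - lo."""
    step = Fraction(1)
    while True:
        # smallest multiple of `step` strictly greater than lo
        q = (Fraction(lo) // step + 1) * step
        if q < hi:
            return q
        step /= 2
```

Applied to the example in the comment: take lo to be the best utility any other action can achieve and hi a rational upper approximation of π, and the returned q plays the role of x.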
That doesn’t actually work. Take Newcomb’s problem. Suppose that your formal system quickly proves that A()≠1. Then it concludes that A()=1 ⇒ U()∈[0,0]; on the other hand it ends up concluding correctly that A()=2 ⇒ U()∈[1000,1000], so it ends up two-boxing. This is a possible behavior, even if the formal system used is sound, if one uses rational intervals as you recommend. On the other hand, as I have shown, if you choose t sufficiently large the algorithm I recommend in my post will necessarily end up one-boxing if the formal system used is sound. (Using intervals was actually the first idea I had when coming to terms with the problem detailed in the post, but it simply doesn’t work.)
On the other hand, as I have shown, if you choose t sufficiently large the algorithm I recommend in my post will necessarily end up one-boxing if the formal system used is sound.
This is incorrect, as Zeno showed more than 2000 years ago. It could be that your inference system generates an infinite sequence of statements of the form A()=1 ⇒ U()≥Si, with the sequence {Si} tending to, say, 100 but with all Si<100, so that A()=1 loses to A()=2 no matter how large the timeout is.
That’s why you enumerate all proofs of statements of the form A()=a ⇒ U()≥u (where u is a rational number in canonical form). It’s a well-known fact that it is possible to enumerate all the provable statements of a given formal system without skipping any.
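A sketch of the enumeration-based decision rule being described (the theorem stream here is a stand-in for an actual proof enumerator, which obviously can’t be implemented in a few lines; everything else — the names, the argmax tie-breaking — is my own framing):

```python
from fractions import Fraction

def choose_action(theorem_stream, actions, t):
    """Scan the first t theorems of the form (a, u), read as
    'A()=a implies U()>=u' with u a rational in canonical form.
    Keep the best proved lower bound for each action, then return
    the action with the highest bound (or None if nothing was
    proved within the time budget)."""
    best = {a: None for a in actions}
    for i, (a, u) in enumerate(theorem_stream):
        if i >= t:
            break
        if a in actions and (best[a] is None or u > best[a]):
            best[a] = u
    proved = {a: u for a, u in best.items() if u is not None}
    return max(proved, key=proved.get) if proved else None
```

Because the enumeration eventually reaches every provable statement, for t large enough the rational bound x from the density argument gets proved for the best action, so no infinite Zeno-style sequence of weaker bounds can make it lose.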
This is a possible behavior, even if the formal system used is sound, if one uses rational intervals as you recommend.
Not if we use a Goedel statement/chicken rule failsafe like the one discussed in Slepnev’s article you linked to.
There are some subtleties about doing this in the interval setting which made me doubt that it could be done, but after thinking about it some more I must admit that it is possible.
But I think that my algorithm for the non-oracle setting is still valuable.
I’m confused. What’s the distinction between x and u here?
I don’t think taking Polyjuice modifies your genetic code. If that were the case, using Polyjuice to take the form of a muggle or a squib would leave you without your magical powers.
It’s explained in detail in chapter 25 that the genes that make a person a wizard do not do so by building some complex machinery which allows you to become a wizard; the genes that make you a wizard constitute a marker which indicates to the source of magic that you should be allowed to cast spells.
Proposition p is meaningful relative to the collection of possible worlds W if and only if there exist w, w’ in W such that p is true in the possible world w and false in the possible world w’.
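In code, with possible worlds as plain values and a proposition as a predicate over them, the definition is just (a toy formalization, assuming W is finite so it can be enumerated):

```python
def meaningful(p, worlds):
    """A proposition p (a predicate over worlds) is meaningful
    relative to `worlds` iff it is true in at least one world
    and false in at least one other."""
    worlds = list(worlds)
    return any(p(w) for w in worlds) and not all(p(w) for w in worlds)
```

A tautology over W comes out not meaningful under this definition, since no pair of worlds disagrees about it.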
Then the question becomes: to be able to reason in all generality, what collection of possible worlds should one use?
That’s a very hard question.
Also I totally think there was a respectable hard problem.
So you do have a solution to the problem?
Actually, dealing with a component of your ontology not being real seems like a far harder problem than the problem of such a component not being fundamental.
According to the Great Reductionist Thesis everything real can be reduced to a mix of physical reference and logical reference. In which case, if every component of your ontology is real, you can obtain a formulation of your utility function in terms of fundamental things.
The case where some components of your ontology can’t be reduced because they’re not real, and where your utility function refers explicitly to such entities, seems considerably harder, but that is exactly the problem that someone who realizes God doesn’t actually exist is confronted with, and we do manage that kind of ontology crisis.
So are you saying that the GRT is wrong, or that none of the things we value are actually real, or that we can’t program a computer to perform reductions (which seems absurd given that we have managed to perform some reductions already), or what? Because I don’t see what you’re trying to get at here.
A) is very hard to test given the restriction on using magic around muggles. As for B), powerful spells are mostly restricted by the Interdict of Merlin. C) is, as you pointed out, extremely difficult to research effectively. I’m more surprised that Harry never bothered to ask how new charms are discovered. After all, how are you supposed to figure out that you are supposed to say “Wingardium Leviosa” and then move your wand in a certain way? And he has been told that new charms are discovered every year, so we know it’s possible.