It’s obvious to us that the prompts are lying; how do you know it isn’t also obvious to the AI? (To the degree it even makes sense to talk about the AI having “revealed preferences”)
Dacyn (David Simmons)
Calvinists believe in predestination, not Protestants in general.
Wouldn’t that mean every sub-faction recursively gets a veto? Or do the sub-faction vetoes only allow the sub-faction to veto the faction veto, rather than the original legislation? The former seems unwieldy, while the latter seems to contradict the original purpose of DVF...
(But then: aren’t there zillions of Boltzmann brains with these memories of coherence, who are making this sort of move too?)
According to standard cosmology, there are also zillions of actually coherent copies of you, and the ratio is heavily tilted towards the actually coherent copies under any reasonable way of measuring. So I don’t think this is a good objection.
“Only food that can be easily digested will provide calories”
That statement also seems obviously wrong: plenty of things are ‘easily digested’ in any reasonable meaning of that phrase, while providing ~0 calories.
I think you’ve interpreted this backwards; the claim isn’t that “easily digested” implies “provides calories”, but rather that “provides calories” implies “easily digested”.
In constructivist logic, proof by contradiction must construct an example of the mathematical object which contradicts the negated theorem.
This isn’t true. In constructivist logic, if you are trying to disprove a statement of the form “for all x, P(x)”, you do not actually have to find an x such that P(x) is false—it is enough to assume that P(x) holds for various values of x and then derive a contradiction. By contrast, if you are trying to prove a statement of the form “there exists x such that P(x) holds”, then you do actually need to construct an example of x such that P(x) holds (in constructivist logic at least).
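To make the contrast concrete, here is a sketch in Lean 4 (the theorem names are my own): refuting a universal statement only requires assuming it and deriving a contradiction, while proving an existential statement requires an explicit witness.

```lean
-- Refuting a universal claim constructively: assume it and derive a
-- contradiction by instantiating it at particular values; no standalone
-- "counterexample object" is constructed by the proof.
theorem not_all_nats_equal : ¬ ∀ n m : Nat, n = m := by
  intro h
  exact absurd (h 0 1) (by decide)

-- Proving an existential claim: an explicit witness (here, 0) is required.
theorem exists_even : ∃ n : Nat, n % 2 = 0 := ⟨0, by decide⟩
```

Both proofs are constructively valid; only the second needs a witness.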
Just a technical point, but it is not true that most of the probability mass of a hypothesis has to come from “the shortest claw”. You can have lots of longer claws which together have more probability mass than a shorter one. This is relevant to situations like quantum mechanics, where the claw first needs to extract you from an individual universe of the multiverse, and that costs a lot of bits (more than just describing your full sensory data would cost), but from an epistemological point of view there are many possible such universes that you might be a part of.
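A toy calculation (lengths purely illustrative) shows how many longer claws can together outweigh one shorter claw under a 2^-length prior:

```python
# One claw of length 10 vs. a hundred claws of length 15: each longer claw
# is 5 bits (a factor of 32) cheaper in probability, but there are 100 of
# them, so together they carry more mass.
short_claw_mass = 2 ** -10
long_claws_mass = 100 * 2 ** -15  # = 3.125 * 2^-10
print(long_claws_mass > short_claw_mass)  # True
```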
As I understood it, the whole point is that the buyer is proposing C as an alternative to A and B. Otherwise, there is no advantage to him downplaying how much he prefers A to B / pretending to prefer B to A.
Hmm, the fact that C and D are even on the table makes it seem less collaborative to me, even if you are only explicitly comparing A and B. But I guess it is kind of subjective.
It seems weird to me to call a buyer and seller’s values aligned just because they both prefer outcome A to outcome B, when the buyer prefers C > A > B > D and the seller prefers D > A > B > C, which are almost exactly misaligned. (Here A = sell at current price, B = don’t sell, C = sell at lower price, D = sell at higher price.)
Isn’t the fact that the buyer wants a lower price proof that the seller and buyer’s values aren’t aligned?
You’re right that “Experiencing is intrinsically valuable to humans”. But why does this mean humans are irrational? It just means that experience is a terminal value. But any set of terminal values is consistent with rationality.
Of course, from a pedagogical point of view it may be hard to explain why the “empty function” is actually a function.
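One way to make it concrete (a sketch; the dict-based modeling is my own choice) is to spell out the definition of a function and check that the empty case satisfies it vacuously:

```python
# Model a function A -> B as a dict whose key set is exactly A.
def is_function(mapping, domain, codomain):
    # every element of the domain has exactly one image,
    # and every image lies in the codomain
    return set(mapping) == set(domain) and all(
        v in codomain for v in mapping.values()
    )

# The empty function: both conditions hold vacuously over an empty domain.
print(is_function({}, set(), {1, 2, 3}))  # True
```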
When you multiply two prime numbers, the product will have at least two distinct prime factors: the two prime numbers being multiplied.
Technically, it is not true that the prime numbers being multiplied need to be distinct. For example, 2*2=4 is the product of two prime numbers, but it is not the product of two distinct prime numbers.
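A quick sanity check (the helper is written just for illustration):

```python
def distinct_prime_factors(n):
    """Return the set of distinct prime factors of n, by trial division."""
    factors = set()
    d = 2
    while d * d <= n:
        while n % d == 0:
            factors.add(d)
            n //= d
        d += 1
    if n > 1:
        factors.add(n)
    return factors

print(distinct_prime_factors(2 * 2))  # {2}: only one distinct prime factor
print(distinct_prime_factors(2 * 3))  # {2, 3}
```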
As a result, it is impossible to determine the sum of the largest and second largest prime numbers, since neither of these can be definitively identified.
This seems wrong: “neither can be definitively identified” makes it sound like they exist but just can’t be identified...
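Indeed, no largest prime exists at all. Euclid’s argument can be sketched as code (helper names are my own): multiply any finite list of primes, add 1, and factor; the smallest prime factor of the result is a prime outside the list.

```python
def smallest_prime_factor(n):
    d = 2
    while d * d <= n:
        if n % d == 0:
            return d
        d += 1
    return n

def prime_outside(primes):
    # p1*...*pk + 1 leaves remainder 1 when divided by any pi, so its
    # smallest prime factor is a prime missing from the list.
    product = 1
    for p in primes:
        product *= p
    return smallest_prime_factor(product + 1)

print(prime_outside([2, 3, 5]))  # 31, since 2*3*5 + 1 = 31 is itself prime
```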
Safe primes are a subset of Sophie Germain primes
Not true, e.g. 7 is safe but not Sophie Germain.
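This is easy to verify directly (is_prime is a naive trial-division helper):

```python
def is_prime(n):
    if n < 2:
        return False
    d = 2
    while d * d <= n:
        if n % d == 0:
            return False
        d += 1
    return True

def is_safe(p):
    # p is a safe prime iff (p - 1) / 2 is also prime
    return is_prime(p) and is_prime((p - 1) // 2)

def is_sophie_germain(p):
    # p is a Sophie Germain prime iff 2p + 1 is also prime
    return is_prime(p) and is_prime(2 * p + 1)

print(is_safe(7))            # True: (7 - 1) / 2 = 3 is prime
print(is_sophie_germain(7))  # False: 2 * 7 + 1 = 15 = 3 * 5
```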
OK, that makes sense.
OK, that’s fair, I should have written down the precise formula rather than an approximation. My point though is that your statement
the expected value of X happening can be high when it happens a little (because you probably get the good effects and not the bad effects Y)
is wrong because a low probability of large bad effects can swamp a high probability of small good effects in expected value calculations.
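With made-up numbers:

```python
# Hypothetical payoffs: a 99% chance of a small gain is swamped by a
# 1% chance of a large loss.
p_good, gain = 0.99, 1.0
p_bad, loss = 0.01, -1000.0
expected_value = p_good * gain + p_bad * loss
print(round(expected_value, 2))  # -9.01
```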
Yeah, but the expected value would still be .
I don’t see why you say Sequential Proportional Approval Voting gives little incentive for strategic voting. If I am confident a candidate I support is going to be elected in the first round, it’s in my interest not to vote for them so that my votes for other candidates I support will count for more. Of course, if a lot of people think like this then a popular candidate could actually lose, so there is a bit of a brinksmanship dynamic going on here. I don’t think that is a good thing.
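To make the incentive concrete, here is a toy SPAV simulation (the scenario and numbers are invented). A ballot’s weight in each round is 1/(1+s), where s is how many of its approved candidates are already elected, so approving a sure winner halves a ballot’s later influence:

```python
from collections import Counter

def spav(ballots, seats):
    """Sequential Proportional Approval Voting with 1/(1+s) reweighting."""
    elected = []
    for _ in range(seats):
        scores = Counter()
        for approvals in ballots:
            weight = 1 / (1 + sum(c in elected for c in approvals))
            for c in approvals:
                if c not in elected:
                    scores[c] += weight
        # simplistic tie-break by candidate name
        winner = max(scores, key=lambda c: (scores[c], c))
        elected.append(winner)
    return elected

# 2 seats. A is a sure winner; B and C compete for the second seat.
sincere = [{"A", "B"}] * 60 + [{"A"}] * 10 + [{"C"}] * 31
print(spav(sincere, 2))  # ['A', 'C']: the A+B ballots are halved, B gets 30 < 31

# If 5 of the A+B voters strategically drop A, A still wins the first round,
# and their full-weight approvals of B flip the second seat.
strategic = [{"A", "B"}] * 55 + [{"B"}] * 5 + [{"A"}] * 10 + [{"C"}] * 31
print(spav(strategic, 2))  # ['A', 'B']: B gets 55/2 + 5 = 32.5 > 31
```

If all 60 of those voters drop A, though, A loses the first round outright, which is the brinksmanship dynamic.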
The definition of a derivative seems wrong. For example, suppose that f(x) = 0 for rational x but f(x) = 1 for irrational x. Then f is not differentiable anywhere, but according to your definition it would have a derivative of 0 everywhere (since the increment could be an infinitesimal consisting of a sequence of only rational numbers).
No, you can only use the geometric expected utility for nonnegative utility functions.
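Concretely (a sketch, where I am taking geometric expected utility to mean exp of the expected log utility):

```python
import math

def geometric_ev(utils, probs):
    # exp(sum_i p_i * log(u_i)) -- only defined for positive utilities,
    # since log has no real value at negative numbers (or at 0).
    return math.exp(sum(p * math.log(u) for u, p in zip(utils, probs)))

print(geometric_ev([1.0, 4.0], [0.5, 0.5]))  # ~2.0, the geometric mean
# geometric_ev([-1.0, 4.0], [0.5, 0.5]) raises ValueError ("math domain
# error"), because math.log(-1.0) is undefined over the reals.
```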