Everyone choosing how their share of resources is used has the problem that everyone might be horrified at what someone else is doing.
They would only notice that if someone ever writes a post about it.
If there is a 50-50 chance of foom vs. non-foom, and in the non-foom scenario we expect to acquire enough evidence to attract an order of magnitude more funding, then to maximize the chance of a good outcome we should, today, invest in the foom scenario, because the non-foom scenario can be handled by that more reluctant funding later.
Let us consider such a conserved karma system. For every group of users that gets upvoted by outsiders more than they upvote outsiders, their karma is going to increase until the increase to their voting power produces an equilibrium. Consider such a powerful group that tends to upvote each other a lot, no conspiracy required. Their posts are going to be more visible without the group spending any of their collective power to make it happen. More visible posts will get more upvotes, compounding the group’s power with interest. There are combinatorially many potential groups, and this karma system would naturally seek out the groups that best fit the above story, and grant them power.
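A toy simulation of this dynamic (my own caricature, with made-up parameters: voting power proportional to karma, each upvote transferring a fixed fraction of the voter's karma, and outsiders upvoting whoever is most visible) shows the compounding effect without any conspiracy:

```python
# Toy model of a conserved karma system. All numbers and rules here are
# illustrative assumptions, not a claim about any real forum's mechanics.
EPS = 0.05                 # fraction of voter's karma transferred per upvote
N_GROUP, N_OUT = 10, 90    # a mutually-upvoting group among 100 users
karma = {i: 1.0 for i in range(N_GROUP + N_OUT)}   # total karma = 100, conserved

def step(karma):
    total_before = sum(karma.values())
    transfers = []
    for voter in karma:
        if voter < N_GROUP:
            # group members upvote a fellow member (round-robin, no coordination)
            target = (voter + 1) % N_GROUP
        else:
            # outsiders upvote the currently most visible (highest-karma) poster
            target = max(karma, key=karma.get)
        transfers.append((voter, target, EPS * karma[voter]))
    for voter, target, amount in transfers:
        karma[voter] -= amount
        karma[target] += amount
    # the system is conserved: votes move karma around, never create it
    assert abs(sum(karma.values()) - total_before) < 1e-9

for _ in range(50):
    step(karma)

group_share = sum(karma[i] for i in range(N_GROUP)) / sum(karma.values())
print(f"group share of total karma after 50 rounds: {group_share:.2f}")
```

The group starts with 10% of the karma and ends with most of it, purely because visibility feeds upvotes and upvotes feed visibility.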
I doubt that there’s any moral difference between running a person and asking a magical halting oracle what they would have said.
Why do you seem so sure about this? I see no moral argument for whether we should rather have 7 billion humans or a thousand, all else being equal. (Of course, there’s also no acceptable way to move from the former to the latter.) (Both the availability of commons and the economies of scale for goods, services and research should not play a role in this moral calculus.)
This anthropic evidence gives you a likelihood function. If you want a probability distribution, you additionally need a prior probability distribution.
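As a toy illustration of that distinction (hypothetical numbers, not an actual anthropic calculation), the likelihood only becomes a posterior once you multiply in a prior and normalize:

```python
# Purely illustrative: two made-up hypotheses, a made-up likelihood from
# some piece of anthropic evidence, and a prior that must come from elsewhere.
hypotheses = ["doom soon", "doom late"]
likelihood = [0.9, 0.3]    # P(observation | hypothesis) -- the evidence alone
prior      = [0.5, 0.5]    # the extra ingredient the comment is pointing at

unnorm = [p * l for p, l in zip(prior, likelihood)]
posterior = [u / sum(unnorm) for u in unnorm]
print(posterior)   # roughly [0.75, 0.25]
```

A different prior would give a different posterior from the exact same likelihood, which is the point.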
Proves too much: This would give ~the same answer for any other future event that marks the end of some duration that started in the last century.
Can’t we just say something like “Optimize e^(-x²). The Taylor series converges, so we can optimize it instead. Use a partial sum as a proxy. Oops, we chose the worst possible value. Should have used another mode of convergence!”?
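Concretely: the Taylor series of e^(-x²) converges pointwise everywhere (and uniformly on compact sets), but not uniformly on all of ℝ. A partial sum like 1 - x² + x⁴/2 tends to +∞, so maximizing it pushes x toward ±∞, exactly where the true function approaches its infimum of 0. A quick check:

```python
import math

# Partial sum of the Taylor series of exp(-x^2) = sum_n (-x^2)^n / n!
def partial_sum(x, n_terms):
    return sum((-x**2) ** n / math.factorial(n) for n in range(n_terms))

x = 10.0
print(partial_sum(x, 3))   # 1 - 100 + 5000 = 4901.0: the proxy looks great here
print(math.exp(-x**2))     # ~3.7e-44: the true function is nearly at its minimum
```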
Because “unfortunately” we are out of boardgames, and this might find another one.
PA+1 can already provide this workflow: given that nPA proves s, and that PA proves everything that nPA does, we get that PA proves s, and can then use the +1 to conclude s itself. And nnPA can still be handled by PA+1.
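Assuming “+1” means adding a local reflection schema Prov_PA(⌜φ⌝) → φ over PA (my reading, not necessarily the original one), the workflow can be sketched as:

```latex
\begin{align*}
&(1)\ PA \vdash \mathrm{Prov}_{nPA}(\ulcorner s \urcorner)
  && \text{given: } nPA \text{ proves } s\\
&(2)\ PA \vdash \forall x\,\big(\mathrm{Prov}_{nPA}(x) \to \mathrm{Prov}_{PA}(x)\big)
  && \text{given: } PA \text{ proves all that } nPA \text{ does}\\
&(3)\ PA \vdash \mathrm{Prov}_{PA}(\ulcorner s \urcorner)
  && \text{from (1) and (2)}\\
&(4)\ PA{+}1 \vdash s
  && \text{reflection applied to (3)}
\end{align*}
```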
Could we train AlphaZero on all games it could play at once, then find the rule set its learning curve looks worst on?
Producing a strategic advantage for any party at all that is decisive enough to safely disarm the threat of nuclear war.
Acausal trade on even footing with distant superintelligences.
If our physics happens to allow for an easy way to destroy the world, then the way we do science, someone will think of it, someone will talk, and someone will try it. If instead one superintelligent polymath did our research, we don’t automatically lose if some configuration of magnets, copper and glass can ignite the atmosphere.
Let f map each prover p1 to a prover that adds (at least) the inference rule “If _(p1) proves that _(p1) proves all that p2 does, then f(p1) proves all that p2 does.”
It is unclear which blanks are filled with f and which with the identity function to match your proposal. The last f must be there because we can only have the new prover prove additional things. If all blanks are filled with f, f(p1) is inconsistent by Löb’s theorem and taking p2 to be inconsistent.
Investors would prefer to invest in moonshot megaprojects over, like, infrastructure megaprojects. Does this also prove too much?
If after 10% of the time and the budget, the startup can tell that success is very unlikely, should they be incentivized to abort? Because the current setup would seem to have them chug along until the budget is gone.
Seems misaligned. Consider a project that shareholders predict will deterministically use exactly its budget. They would prefer that it first bet the entire budget on black in a casino: it is then either immediately bankrupt, or able to complete and additionally pay out its original budget.
That is of course true for anyone who buys a call option right now as well.
Saying that the problem is about computability because there is no computable solution proves too much: I could reply that it is about complexity theory because there is no polynomial-time solution. (In fact, there is no solution.)
We can build something like a solution by specifying that descriptions must be written in some formal language that cannot describe its own set of describables, and then using a more powerful formal language to talk about that previous language’s set. For powerful enough languages that is still not computable, though, so computability theory would not notice such a solution, which speaks against looking at this through the lens of computability theory.
Be careful stating what physics can’t prove.
That still doesn’t make computability relevant until one introduces it deliberately. Compare weaker notions than computability, like computability in polynomial time. Computability theory raises the same complaint once we have explicitly made definability subjective, and should then have no more logical problems.
Introducing a handicap to compensate for an asymmetry does not remove the need to rely on the underlying process pointing toward truth in the first place.