Doesn’t matter until the switch is done.
I am into something that can be called “meta-politics”: institutional reform. That is, crafting decisionmaking algorithms to have good characteristics — incentives, participation, etc. — independent of the object-level goals of politics. I think this is “meta” in a different way than what you’re talking about in this article; in short, it’s prescriptive meta, not descriptive meta. And I think that makes it “OK”; that is, largely exempt from the criticisms in this article.
Would you agree?
I believe that Bitcoin is a substantial net negative for the world. I think that blockchain itself, even without proof of work, is problematic as a concept — with some real potential upsides, but also real possibly-intrinsic downsides even apart from proof of work. I’d like a world where all PoW-centric cryptocurrency was not a thing (with possible room for PoW as a minor ingredient for things like initial bootstrapping), and crypto in general was more an area of research than investment for now. I think that as long as >>90% of crypto is PoW, it’s better (for me, at least) to stay away entirely rather than trying to invest in some upstart PoS coin.
#2. Note that even if ETH does switch in the future, investing in ETH today is still investing in proof-of-work. Also, as long as BTC remains larger and doesn’t switch, I suspect there’s likely to be spillover between ETH and BTC, such that it would be difficult to put energy into ETH without propping up the BTC ecosystem to some degree.
I feel it’s worth pointing out that all proof-of-work cryptocurrency is based on literally burning use-value to create exchange-value, and that this is not a sustainable long-term plan. And as far as I can tell, non-proof-of-work cryptocurrency is mostly a mirage or even a deliberate red herring / bait-and-switch.
I’m not an expert, but I choose not to participate on moral grounds. YMMV.
I realize that what I’m saying here is probably not a new idea to most people reading, but it seems clearly enough true to me that it bears repeating anyway.
If anyone wants links to further arguments in this regard, from me rather than Google, I’d be happy to provide.
If we’re positing a Grahamputer, then “yeah but it’s essentially the same if you’re not worried about agents of equal size” seems too loose.
In other words, with great compute power, comes great compute responsibility.
Thanks for pointing that out. My arguments above do not apply.
I’m still skeptical. I buy anthropic reasoning as valid in cases where we share an observation across subjects and time (eg, “we live on a planet orbiting a G2V-type star”, “we inhabit a universe that appears to run on quantum mechanics”), but not in cases where each observation is unique (eg, “it’s the year 2021, and there have been about 107,123,456,789 (plus or minus a lot) people like me ever”). I am far less confident of this than I stated for the arguments above, but I’m still reasonably confident, and my expertise does still apply (I’ve thought about it more than just what you see here).
Our sense-experiences are “unitary” (in some sense which I hope we can agree on without defining rigorously), so of course we use unitary measure to predict them. Branching worlds are not unitary in that sense, so carrying over unitarity from the former to the latter seems an entirely arbitrary assumption.
A finite number (say, the number of particles in the known universe), raised to a finite number (say, the number of Planck time intervals before dark energy tears the universe apart), gives a finite number. No need for divergence. (I think both of those are severe overestimates for the actual possible branching, but they are reasonable as handwavy demonstrations of the existence of finite upper bounds.)
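To put placeholder numbers on that (the exponents here are illustrative orders of magnitude only, not claims about the actual figures):

$$
\text{branches} \;\lesssim\; N^{\,T}, \qquad N \approx 10^{80} \text{ particles}, \quad T \approx 10^{150} \text{ Planck intervals},
$$

so the bound is on the order of $10^{\,80\cdot 10^{150}}$: absurdly large, but still finite.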
I don’t think the point you were arguing against is the same as the one I’m making here, though I understand why you think so.
My understanding of your model is that, simplifying relativistic issues so that “simultaneous” has a single unambiguous meaning, total measure across quantum branches of a simultaneous time slice is preserved; and your argument is that, otherwise, we’d have to assign equal measure to each unique moment of consciousness, which would lead to ridiculous “Boltzmann brain” scenarios. I’d agree that your argument is convincing that different simultaneous branches have different weight according to the rules of QM, but that does not at all imply that total weight across branches is constant across time.
I didn’t do this problem, but I can imagine I might have been tripped up by the fact that “hammer” and “axe” are tools and not weapons. In standard DnD terminology, these are often considered “simple weapons”; distinct from “martial weapons” like warhammer and battleaxe, but still within the category of “weapons”.
I guess that the “toolish” abstractions might have tipped me off, though. And even if I had made this mistake, it would only have mattered for “simple-weapon” tools with a modifier.
This is certainly a cogent counterargument. Either side of this debate relies on a theory of “measure of consciousness” that is, as far as I can tell, not obviously self-contradictory. We won’t work out the details here.
In other words: this is a point on which I think we can respectfully agree to disagree.
It seems to me that exact duplicate timelines don’t “count”, but duplicates that split and/or rejoin do. YMMV.
I think both your question and self-response are pertinent. I have nothing to add to either, save a personal intuition that large-scale fully-quantum simulators are probably highly impractical. (I have no particular opinion about partially-quantum simulators — even possibly using quantum subcomponents larger than today’s computers — but they wouldn’t change the substance of my not-in-a-sim argument.)
Yes, your restatement feels to me like a clear improvement.
In fact, considering it, I think that if algorithm A is “truly more intelligent” than algorithm B, then, letting f(x) be the compute it takes for B to perform as well as or better than A running with compute x, I’d expect f(x) could even be super-exponential in x. Exponential would be the lower bound; that’s what you’d get from a mere incremental improvement in pruning. From this perspective, anything polynomial would be “just implementation”, not “real intelligence”.
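To restate the intuition a bit more formally (my framing, nothing rigorous), with f(x) defined as above:

$$
\begin{aligned}
f(x) &= \mathrm{poly}(x) &&\Rightarrow\ \text{“just implementation”}\\
f(x) &\approx c^{x} &&\Rightarrow\ \text{an incremental pruning improvement}\\
f(x) &\gg c^{x}\ \ (\text{e.g. } 2^{2^{x}}) &&\Rightarrow\ \text{“truly more intelligent”}
\end{aligned}
$$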
Though I’ve posted 3 more-or-less-strong disagreements with this list, I don’t want to give the impression that I think it has no merit. Most specifically: I strongly agree that “Institutions could be way better across the board”, and I’ve decided to devote much of my spare cognitive and physical resources to gaining a better handle on that question specifically in regards to democracy and voting.
Third, separate disagreement: This list states that “vastly more is at stake in [existential risks] than in anything else going on”. This seems to reflect a model in which “everything else going on” — including power struggles whose overt stakes are much much lower — does not substantially or predictably causally impact outcomes of existential risk questions. I think I disagree with that model, though my confidence in this is far, far less than for the other two disagreements I’ve posted.
Separate point: I also strongly disagree with the idea that “there’s a strong chance we live in a simulation”. Any such simulation must be either:
fully-quantum, in which case it would require the simulating hardware to be at least as massive as the simulated matter, and probably orders of magnitude more massive. The log-odds of being inside such a simulation must therefore be negative by at least those orders of magnitude (see the rough formalization after this list).
not-fully-quantum, in which case the quantum branching factor per time interval is many many many orders of magnitude less than that of an unsimulated reality. In this case, the log-odds of being inside such a simulation would be very very very negative.
based on some substrate governed by physics whose “computational branching power” is even greater than quantum mechanics, in which case we should anthropically expect to live in that simulator’s world and not this simulated one.
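One way to make the fully-quantum branch quantitative (my formalization, and it leans on the assumption that anthropic weight scales with the physical resources implementing an observer): if faithful simulation requires hardware more massive than the simulated matter by a factor of $k$, then

$$
\log_{10}\big(\text{odds of being in such a simulation}\big) \;\lesssim\; -\log_{10} k .
$$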
Unlike my separate point about the great filter, I can claim no special expertise on this; though both my parents have PhDs in physics, I couldn’t even write the Dirac equation without looking it up (though, given a week to work through things, I could probably do a passable job reconstructing Shor’s algorithm with nothing more than access to Wikipedia articles on non-quantum FFT). Still, I’m decently confident about this point, too.
Strongly disagree about the “great filter” point.
Any sane understanding of our prior on how many alien civilizations we should have expected to see is structured (or at least has much of its structure) more or less like the Drake equation: a series of terms, each with more or less prior uncertainty around it, that multiply together to get an outcome. Furthermore, that point is, to some degree, fractal; the terms themselves can be — often and substantially, though not always and completely — understood as the products of sub-terms.
By the Central Limit Theorem, as the number of such terms and sub-terms increases, this prior approaches a log-normal distribution; that is, if you take the inverse (proportional to the amount of work we’d expect to have to do to find the first extraterrestrial civilization), the mean is much higher than the median, dominated by a long upper tail. That point applies not just to the prior, but to the posterior after conditioning on evidence. (In fact, as we come to have less uncertainty about the basic structure of the Drake-type equation — which terms it comprises, even though we may still have substantial uncertainty about the values of those terms — the argument that the posterior must be approximately log-normal only grows stronger than it was for the prior.)
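Here’s a minimal numerical sketch of that point. The term ranges below are made up purely for illustration (they are not estimates of actual Drake terms); the point is just that a product of several independently uncertain factors is approximately log-normal, so the mean of its inverse is pulled far above the median by the upper tail.

```python
import numpy as np

rng = np.random.default_rng(0)
n_samples = 200_000

# Made-up log10 ranges for seven Drake-style terms (illustrative only).
log10_ranges = [(-1, 1), (-2, 0), (-1, 0.5), (-3, 0), (-3, 0), (-3, 0), (2, 6)]

# Each term's log10 is drawn uniformly over its range; multiplying terms
# means adding their log10s, so by the CLT the sum is roughly normal and
# the product N is roughly log-normal.
log10_terms = np.column_stack([rng.uniform(lo, hi, n_samples) for lo, hi in log10_ranges])
log10_N = log10_terms.sum(axis=1)

# 1/N: a rough proxy for how much work we'd expect to do to find the first
# extraterrestrial civilization.
inv_N = 10.0 ** (-log10_N)
print("median of 1/N:", np.median(inv_N))
print("mean of 1/N:  ", np.mean(inv_N))  # many orders of magnitude above the median
```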
In this situation, given the substantial initial uncertainty about the value of the terms associated with steps that have already happened, the evidence we can draw from the Great Silence about any steps in the future is very, very weak.
As a statistics PhD, experienced professionally with Bayesian inference, my confidence on the above is pretty high. That is, I would be willing to bet on this at basically any odds, as long as the potential payoff was high enough to compensate me for the time it would take to do due diligence on the bet (that is, make sure I wasn’t going to get “cider in my ear”, as Sky Masterson says). That’s not to say that I’d bet strongly against any future “Great Filter”; I’d just bet strongly against the idea that a sufficiently well-informed observer would conclude, post-hoc, that the bullet point above about the “great filter” was at all well-justified based on the evidence implicitly cited.
I’m not sure if this comment goes best here, or in the “Against Strong Bayesianism” post. But I’ll put it here, because this is fresher.
I think it’s important to be careful when you’re taking limits.
I think it’s true that “The policy that would result from a naive implementation of Solomonoff induction followed by expected utility maximization, given infinite computing power, is the ideal policy, in that there is no rational process (even using arbitrarily much computing power) that leads to a policy that beats it.”
But say somebody offered you an arbitrarily large-and-fast, but still finite, computer. That is to say, you’re allowed to ask for a googolplex of operations per second and a googolplex of bytes of RAM, or even Graham’s number of each, but you have to name a number and then live with it. The above statement does NOT mean that the program you should run on that hyper-computer is a naive implementation of Solomonoff induction. You would still want to use the known tricks for improving the efficiency of Bayesian approximations; that is, things like MCMC, SMC, efficient neural proposal distributions with importance-weighted sampling, efficient pruning of simulations to just the parts that are relevant for predicting input (which, in turn, includes all kinds of causality logic), smart allocation of computational resources between different modes and fallbacks, etc. Such tricks — even just the ones we have already discovered — look a lot more like “intelligence” than naive Solomonoff induction does, even if, when appropriately combined, their limit as computation goes to infinity is the same as the limit of Solomonoff induction.
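As a concrete miniature of one such trick, here is a sketch of self-normalized importance sampling on a toy one-parameter model (the model and numbers are my own illustrative choices, not anything from the discussion above); a smarter proposal buys accuracy per unit of computation in exactly the way naive enumeration does not.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy problem: unknown mean theta with a standard-normal prior, one noisy
# observation y, and we want the posterior mean E[theta | y].
y, noise_sd = 2.5, 1.0

def log_prior(theta):       # theta ~ Normal(0, 1)
    return -0.5 * theta**2

def log_likelihood(theta):  # y ~ Normal(theta, noise_sd)
    return -0.5 * ((y - theta) / noise_sd) ** 2

# Proposal: a deliberately broad normal.  A smarter proposal (e.g. one output
# by a learned model) would concentrate samples where the posterior has mass,
# getting the same accuracy with far fewer samples.
proposal_sd = 5.0
theta = rng.normal(0.0, proposal_sd, size=50_000)
log_q = -0.5 * (theta / proposal_sd) ** 2

# Self-normalized importance weights: prior * likelihood / proposal.
log_w = log_prior(theta) + log_likelihood(theta) - log_q
w = np.exp(log_w - log_w.max())
w /= w.sum()

print("importance-sampling estimate:", np.sum(w * theta))
print("exact posterior mean:        ", y / (1 + noise_sd**2))  # conjugate-normal answer
```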
In other words, saying “the limit, as amount-of-computation X goes to infinity, of program A strictly beats program B with any finite amount Y of computation, for any B and Y”, or even “the limit as X goes to infinity of program A is as good as or better than the limit as Y goes to infinity of program B, for any B”, is true, but not very surprising or important, because it absolutely does not imply that “program A with X resources beats program B with X resources, for any B and any given finite X”.
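In symbols (my paraphrase, writing $A(X)$ for how well program A does with compute $X$): the true statements are

$$
\lim_{X\to\infty} A(X) \;\ge\; B(Y) \quad \forall\, B,\, Y
\qquad\text{and}\qquad
\lim_{X\to\infty} A(X) \;\ge\; \lim_{Y\to\infty} B(Y) \quad \forall\, B,
$$

but neither implies the statement that would actually matter for a finite machine,

$$
A(X) \;\ge\; B(X) \quad \forall\, B \text{ and any given finite } X .
$$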
PLACE is compatible with primaries; primaries would still be used in the US.
Thus, PLACE retains all the same (weak) incentives for the local winner to represent the nonpartisan interests of the local district, along with strong incentives to represent the interests of their party × district combination. The additional (weaker) incentives for the other winners whose territory includes the district to represent the interests of their own party × district combinations, filling out the matrix, make PLACE’s representation strictly better.