This is fun. You might consider looking into dynamical systems, since this is in effect what you are studying here. The general idea for a dynamical system is that you have some state x whose derivative is given by some function f, i.e. ẋ = f(x). You can look at the fixed points of such systems and characterize their behavior relative to these. The notion of bifurcation classifies what happens as you change the parameters, in a similar way to what you're doing.
There are maybe two weird things you're doing from this perspective. The first is the max function, which is totally valid, though usually people study these systems with continuous, nonlinear functions f. What you're getting with it falling into a node and staying there is a consequence of the system being otherwise linear. Such systems are pretty easy to characterize in terms of their fixed points. The other weird thing is time dependence; normally these things are given by ẋ = f(x) with no explicit time dependence, and are called autonomous systems. I'm not entirely clear on how you're implementing the preference decay, so I can't say too much there.
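To make the autonomous-system framing concrete, here's a minimal sketch (the logistic equation is my own toy example, not the system from the post):

```python
# Sketch of an autonomous dynamical system dx/dt = f(x) (a toy example,
# not the system from the post): the logistic equation f(x) = r*x*(1 - x).
# Its fixed points are where f(x) = 0, i.e. x = 0 and x = 1.

def f(x, r=2.0):
    return r * x * (1 - x)

def simulate(x0, dt=0.01, steps=2000):
    """Forward-Euler integration of dx/dt = f(x)."""
    x = x0
    for _ in range(steps):
        x += dt * f(x)
    return x

# Trajectories starting near either fixed point reveal stability:
# x = 1 is attracting, x = 0 is repelling.
print(simulate(0.05))  # ends up very close to 1.0
print(simulate(0.0))   # stays exactly at the fixed point 0.0
```

Perturbing the start slightly away from a repelling fixed point and watching where the trajectory settles is the basic way to classify stability numerically.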
As for the specific content, give me a bit to read more.
John Ioannidis, of all people, who should know better.
I’m somewhat flabbergasted that no one has mentioned radicalxChange or quadratic funding. They’re a solution that doesn’t force the producer to take on all the risks of failure, at the cost of needing a centralized pot of money.
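For anyone unfamiliar, the standard quadratic funding rule is easy to state: a project's total funding is the square of the sum of the square roots of individual contributions, with the central pot covering the gap above what was directly contributed. A toy sketch (my own numbers, not radicalxChange's actual implementation):

```python
from math import sqrt

def qf_match(contributions):
    """Quadratic funding: total funding is (sum of sqrt of each
    contribution)**2; the matching pot covers the difference between
    that and the sum of direct contributions."""
    total = sum(sqrt(c) for c in contributions) ** 2
    return total - sum(contributions)

# Many small contributions attract a far larger match than one big one:
broad = qf_match([1.0] * 100)   # 100 donors giving 1 each
narrow = qf_match([100.0])      # 1 donor giving 100
print(broad, narrow)  # -> 9900.0 0.0
```

The design choice is exactly the one mentioned above: breadth of support, not depth of any one backer's pockets, determines how much the centralized pot pays out.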
You’re missing the very real possibility of long-term negative side-effects from the vaccine, such as triggering an auto-immune disease or actually increasing your susceptibility, both mentioned in the whitepaper (whose risk-assessment I would be pretty sceptical of). I would think of this as more a trade-off between risks of side effects and COVID risks, rather than whether or not you can afford it.
There is a power imbalance in place.
this is precisely the argument that cancel culture often makes, often with good reason, with outside actors piling on what may have started as a parochial dispute.
There's generally a simpler explanation in this case: Trump and the Joint Chiefs of Staff have had a rocky relationship, so the military has no interest in assisting a coup attempt, even if they were willing to renounce democratic norms (they are sworn to protect the constitution, after all). Without cooperation from the military, a coup is a non-starter.
Why doesn't District 9 count? I get that South Africa is a very different place from the rest of the sub-continent, but that would be like saying a movie about Mexico doesn't count as North American.
Your description reminds me somewhat of Slaughterhouse-Five, with its focus on how war would be perceived from an alien perspective, though I guess the moral clarity that that book gets from being written after a war with clear victors is not available to us w.r.t. the state of the Middle East. I second thetruejacob that it is difficult to find any reference to the work; does it exist in English somewhere? It does sound worthwhile.
What needs to be assumed when reasoning about existential risk, and how are the high stakes responsible for forcing us to assume it?
I guess I opted for too much brevity. By their very nature, we don’t* have any examples of existential threats actually happening, so we have to rely very heavily on counterfactuals, which aren’t the most reliable kind of reasoning. How can we reason about what conditions lead up to a nuclear war, for example? We have no data about what led up to one in the past, so we have to rely on abstractions like game theory and reasoning about how close to nuclear war we were in the past. But we need to develop some sort of policy to make sure it doesn’t kill us all either way.
*at a global scale at least. There are civilizations which completely died off (Rapa Nui is an example), but we have few of these, and they’re only vaguely relevant, even as far as climate change goes.
Writing well is really hard. Thanks for sharing.
Often the issue is that what you're trying to predict is sufficiently important that you need to assume *something*, even if the tools you have available are insufficient. Existential risks generally fall into this category. Replace the news with an upcoming cancer diagnosis, and telepathy with paying very careful attention to that organ, and whether Sylvanus is being an idiot becomes much less clear.
On the other hand, if someone is taking even odds on an extremely specific series of events, yeah, they’re kind of dumb. And I wouldn’t be surprised to find pundits doing this.
In a Bayesian context, seeking evidence is about narrowing the probability distribution from what should be a relatively flat prior. One could probably make a case for not making a decision until the cost of putting it off outweighs the gain by decreasing the uncertainty.
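As a toy illustration of that narrowing (a Beta–Bernoulli example of my own, with made-up numbers):

```python
# "Evidence narrows the distribution": a Beta prior over a coin's bias,
# updated with Bernoulli observations. Posterior variance shrinks as
# evidence accumulates. Numbers are invented for the example.

def beta_var(a, b):
    """Variance of a Beta(a, b) distribution."""
    return a * b / ((a + b) ** 2 * (a + b + 1))

# Flat prior Beta(1, 1); observe 7 heads / 3 tails, then 70 / 30.
prior = beta_var(1, 1)
after_10 = beta_var(1 + 7, 1 + 3)
after_100 = beta_var(1 + 70, 1 + 30)
print(prior, after_10, after_100)  # each batch of evidence narrows the posterior
```

The decision rule suggested above would then be: keep gathering observations while the drop in posterior variance is worth more than the cost of delay.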
My takeaway is that we should actually only be counting EPM in these matches, rather than APM, and counting most/all of AlphaStar's clicks as effective.
While this is a nice summary of classifier trade-offs, I think you are entirely too dismissive of the role of history in the dataset, and if I didn’t know any better, I would walk away with the idea that fairness comes down to just choosing an optimal trade-off for a classifier. If you had read any of the technical response, you would have noticed that when controlling for “recidivism, criminal history, age and gender across races, black defendants were 45 percent more likely to get a higher score”. Controls are important because they let you get at the underlying causal model, which is more important for predicting a person’s recidivism than what statistical correlations will tell you. Choosing the right causal model is not an easy problem, but it is at the heart of what we mean when we conventionally talk about fairness.
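The role of controls can be shown with an entirely synthetic toy example (my own made-up numbers, not the COMPAS data): a raw gap between groups can mostly vanish, or persist, once you stratify on a confounder.

```python
# Entirely synthetic illustration of why controls matter: stratifying on
# a confounder (here, number of priors) changes the apparent gap between
# groups, because group B simply has more priors in this fake data.

rows = [
    # (group, priors, score)
    ("A", 0, 2), ("A", 0, 3), ("A", 5, 7),
    ("B", 0, 2), ("B", 5, 7), ("B", 5, 8),
]

def mean(xs):
    return sum(xs) / len(xs)

raw_gap = mean([s for g, p, s in rows if g == "B"]) - \
          mean([s for g, p, s in rows if g == "A"])

# Within each stratum of priors the gap is much smaller:
gap_0 = mean([s for g, p, s in rows if g == "B" and p == 0]) - \
        mean([s for g, p, s in rows if g == "A" and p == 0])
gap_5 = mean([s for g, p, s in rows if g == "B" and p == 5]) - \
        mean([s for g, p, s in rows if g == "A" and p == 5])
print(raw_gap, gap_0, gap_5)  # the raw gap is larger than either within-stratum gap
```

ProPublica's point is the converse: in their analysis a sizable gap *remained* after controlling, which is precisely why the controls matter.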
Surprised no one has brought up the Fourier-domain representation/characteristic functions. Over there, convolution becomes multiplication, so repeatedly convolving f with itself (and standardizing) gives (f̂(ω/√n))ⁿ. Conveniently, gaussians stay gaussians, and the fact that we have probability distributions fixes f̂(0) = 1. So what we're looking for is how quickly the product above squishes to a gaussian around ω = 0, which looks to be determined in large part by the tail behavior of f̂. I suspect what is driving your result of needing few convolutions is the fact that you're working with smooth, mostly low-frequency functions. For example, exp, which is pretty bad, still has O(1/ω²) decay. By throwing in some jagged edges, you could probably concoct a function which will eventually converge to a gaussian, but will take rather a long time to get there (for functions which are merely piecewise smooth, the decay is O(1/ω)).
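Here's a quick numerical sketch of that picture (my own choice of base distribution, a variance-1 uniform): the characteristic function of the standardized n-fold sum is f̂(ω/√n)ⁿ, and it visibly approaches the gaussian exp(−ω²/2) as n grows.

```python
import math

# Fourier-domain view of the CLT: the characteristic function of a
# standardized sum of n iid draws is phi(w/sqrt(n))**n, which should
# approach exp(-w**2/2). Here phi is the characteristic function of a
# uniform on [-sqrt(3), sqrt(3)] (chosen so the variance is 1).

def phi_uniform(w):
    a = math.sqrt(3)
    return 1.0 if w == 0 else math.sin(a * w) / (a * w)

def cf_of_sum(w, n):
    return phi_uniform(w / math.sqrt(n)) ** n

def max_err(n, grid=(0.5, 1.0, 2.0, 4.0)):
    """Worst-case distance from the gaussian CF over a few frequencies."""
    return max(abs(cf_of_sum(w, n) - math.exp(-w * w / 2)) for w in grid)

for n in (1, 4, 16, 64):
    print(n, max_err(n))  # the error shrinks steadily as n grows
```

Swapping in a slowly-decaying f̂ (jagged density, heavy spectral tails) should make the same loop show much slower convergence, which is the conjecture above.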
One of these days I'll take a serious look at characteristic functions, which is roughly the statistician's way of thinking about what I was saying. There's probably an adaptation of the characteristic function proof of the CLT that would be useful here.