Random data point—https://ftx.com/trade/TRUMPFEB (“Trump is the President on Feb 1st, 2021”) is currently at 0.142 (14.2% probability it will happen)...
In mathematics, axioms are not just chosen based on what feels correct—instead, the implications of those axioms are explored, and only if those also match intuition do the axioms have some chance of being accepted. If a reasonable-seeming set of axioms allows you to prove something that clearly should not be provable (such as—in the extreme case—a contradiction), then you know your axioms are no good.
Axiomatically stating a particular ethical framework, then exploring the consequences of those axioms in extreme and tricky cases, can serve a similar purpose—if seemingly sensible ethical “axioms” lead to completely unreasonable conclusions, then you know you have to revise the stated ethical framework in some way.
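As a sketch of what this looks like in the mathematical case (my illustration, not part of the original comment): naive set theory’s unrestricted comprehension schema seems perfectly sensible, yet it proves a contradiction (Russell’s paradox), which is exactly why it was revised.

```latex
% Unrestricted comprehension: for every property \varphi there is a set
% containing exactly the things that satisfy it.
\[ \exists S \, \forall x \, \bigl( x \in S \leftrightarrow \varphi(x) \bigr) \]
% Instantiating \varphi(x) := x \notin x gives the Russell set
% R = \{ x : x \notin x \}, and asking whether R belongs to itself yields
\[ R \in R \;\leftrightarrow\; R \notin R , \]
% a contradiction, which is why the axiom was replaced by ZF's restricted
% separation schema.
```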
Perhaps also higher availability of testing and higher awareness mean that more people with mild symptoms get tested?
Well, this is the Committee on Armed Services—obviously the adversarial view of things is part of their job description… (Not that this isn’t a problem, just pointing out that they are probably not the best place to look for a non-adversarial opinion.)
More of an anecdote than research, but I recently became aware of Dr. A. J. Cronin’s novel “The Citadel”, published in 1937, and the claim that the book prompted new ideas about medicine and ethics, inspiring to some extent the UK NHS and the ideas behind it. I did not look into this much myself, but it is certainly a very fascinating story, if true.
The existence of the “do not throw good money after bad” idiom is indirect evidence that this kind of reframing is helpful in persuading people against the fallacy, at least in some contexts.
First, the poor have a lower savings rate and consume faster, so money velocity is higher. Second, minimum wages are local, and I would imagine that poor people on average spend a bigger fraction of their consumption locally (but I am less certain about this one).
What are the “unnatural” deaths—are they things like car accidents? For those, I’d actually expect them to go down pretty significantly because of the greatly reduced mobility.
Perhaps one aspect of the minimum wage that you are missing is that it differs from price controls on fungible goods in several important respects, in that, everything else being equal:
1. A higher minimum wage means higher demand for the goods consumed by minimum-wage employees.
2. A higher minimum wage incentivises employers to invest more in their employees’ productivity (training, better working conditions, etc.).
3. The same employees may be more productive if you pay them higher wages, and you may be able to attract better employees.
In some cases, 2+3 might mean that there are several equilibrium points that are roughly equally good for employers—either hire high-turnover, low-productivity people at lower wages, or hire lower-turnover, higher-productivity people at higher wages—and effect #1 is enough for the higher minimum wage to just be a win-win (which is perhaps why some employers actually support minimum wage laws).
Your world descriptions and your objections seem to focus on HRAD being the only prerequisite to being able to create an aligned AGI, rather than simply one of them (and the one worth focusing on because of a combination of factors, such as which areas of research are the least attended to by other researchers, which areas could provide insights useful for attacking other ones, which ones are the most likely to be on a critical path, etc.). It could very well be an “overwhelming priority”, as you stated the position you are trying to understand, without the goal being “to come up with a theory of rationality [...] that [...] allows one to build an agent from the ground up”.
I am thinking of the following optimization problem. Let R1 be all the research that we anticipate getting completed by the mainstream AI community by the time they create an AGI. Let R2 be the smallest amount of additional successful research such that R1+R2 allows you to create an aligned AGI. Which research questions that we know how to formulate today, and have a way to start attacking today, are the most likely to be in R2? And among the top choices, which ones are also 1) more likely to produce insights that would help with other parts of R2, and 2) less likely to compress the AGI timeline even further? It seems possible to believe that HRAD is such a good choice (working backwards from R2) without being in any of your worlds (all of which work forward from HRAD).
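To make the shape of this explicit (the arg-min notation and the use of |R2| as the “cost” are my paraphrase, not part of the original framing; R1 and R2 are as defined above):

```latex
% Informal restatement of the optimization problem above.
\[
  R_2^{*} \;=\; \underset{R_2}{\arg\min}\; |R_2|
  \quad \text{subject to} \quad
  R_1 \cup R_2 \ \text{sufficing to build an aligned AGI,}
\]
% and the practical question is then: which research problems that we can
% formulate and start attacking today are most likely to lie in R_2^{*}
% (preferring those that also help with the rest of R_2^{*} and do not
% shorten AGI timelines)?
```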
I saw guidelines along the lines of “You can stop self-quarantining if you had two negative tests taken more than 24hrs apart, with first test at least 3 days after an exposure”. I do not know where this came from, but I saw it from an org that I would expect to be fairly sane in making evidence-based decisions.
I think you might be trying to apply the concept at the wrong granularity. Yes, there is often an iterative combination of the fundamental and the applied, but then you need to classify each iterative step, rather than the whole sequence, and the point is that it’s a “Pasteur-Edison” iteration, not a “Bohr-Edison” one. Almost any new fundamental advance has to go through the “Edison” phase as the technology readiness grows, before it becomes practical. This is true whether the advance came from the “Bohr” quadrant or the “Pasteur” one. The distinction is whether you were mindful of the potential applications when you embarked on the fundamental part (“Pasteur”), or whether the practical implications were only figured out after the fact (“Bohr”). The distinction becomes particularly pronounced when the research effort is only being proposed and you are asking for funding.
Another issue to consider is that the test could have a high false negative rate (I have seen reports as high as 15%—e.g. https://www.npr.org/sections/health-shots/2020/04/21/838794281/study-raises-questions-about-false-negatives-from-quick-covid-19-test), and it appears that false negatives are more likely for asymptomatic people.
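To see why a high false negative rate matters (and why guidelines like the two-negative-test rule above make sense), here is a rough Bayes calculation; the sensitivity, specificity, and prior numbers are illustrative assumptions of mine, not figures from the linked article.

```python
# Illustrative only: rough Bayes update for "still infected given negative tests".
# Treating repeated tests as independent is an optimistic simplification;
# in reality false negatives are likely correlated across tests.

def p_infected_given_negatives(prior, sensitivity, specificity, n_tests=1):
    """P(infected | n independent negative results)."""
    # P(negative | infected) = 1 - sensitivity (the false-negative rate)
    p_neg_if_infected = (1 - sensitivity) ** n_tests
    # P(negative | healthy) = specificity
    p_neg_if_healthy = specificity ** n_tests
    numerator = prior * p_neg_if_infected
    return numerator / (numerator + (1 - prior) * p_neg_if_healthy)

# Example (assumed numbers): 30% prior after a known exposure,
# 85% sensitivity, 99% specificity.
print(p_infected_given_negatives(0.30, 0.85, 0.99, n_tests=1))  # ~0.061
print(p_infected_given_negatives(0.30, 0.85, 0.99, n_tests=2))  # ~0.0097
```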
I wonder whether you may be conflating two somewhat distinct (perhaps even orthogonal) challenges not modeled in the CIDR model:
1. Human actions may reflect human values very imperfectly (or worse—they can be an imperfect reflection of inconsistent, conflicting values).
2. Some actions by the AI may damage the human, at which point the human’s actions may stop being meaningfully correlated with the value function. This is a problem that would still be relevant even if we somehow found an ideal human capable of acting on their values in a perfectly rational manner.
The first challenge “only” requires the AI to be better at deducing the “real” values. (“Only” is in quotes because it’s obviously still a major unsolved problem, and “real” is in quotes because it’s not a given what that actually means.) The second challenge is about the AI needing to be constrained in its actions even before it knows the value function—but there is at least a whole field of Safe RL on how to do this for much simpler tasks, like learning to move a robotic arm without breaking anything in the process.
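For a flavor of what “constrained before knowing the value function” can look like, here is a toy sketch of one common Safe-RL ingredient, an action shield that filters the learner’s actions through a known safety constraint; the class and the safety predicate are invented for illustration and do not come from any particular library.

```python
# Toy sketch: a "shield" keeps exploration safe even while the reward/value
# function is still unknown. All names here are made up for illustration.
import random

class ShieldedAgent:
    def __init__(self, candidate_actions, is_safe):
        self.candidate_actions = candidate_actions
        self.is_safe = is_safe  # conservative safety predicate supplied up front

    def act(self, state):
        # Keep only actions the shield certifies as safe in this state.
        safe = [a for a in self.candidate_actions if self.is_safe(state, a)]
        if not safe:
            return 0  # fall back to a guaranteed-safe default (no-op)
        # The learning algorithm (here: random exploration as a placeholder)
        # only ever chooses among pre-vetted safe actions.
        return random.choice(safe)

# Example: a 1-D "robotic arm" that must stay within its joint limits [0, 10].
def within_limits(state, action):
    return 0 <= state + action <= 10

agent = ShieldedAgent([-1, 0, +1], within_limits)
print(agent.act(10))  # never returns +1, which would exceed the limit
```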
My job is related to AI safety and I do not have my employer’s permission to discuss any details of my work. I do not intend to do that anyway, but being anonymous reduces the chances of something being misinterpreted, taken out of context, etc., and causing trouble for me.
Even unrelated to my employment, my default policy is to be very careful about anything I say publicly under my real name—particularly if it has any chance of being seen as controversial (again, wary of reputational risks). Using an alias reduces the transaction cost of posting (I still have to think twice sometimes, but do not have to police my posts as hard).
But don’t you see—those infections are a second wave, so they do not have to be counted. The model is almost tautologically true that way. But terribly misleading, and very irresponsibly so.
They are not very explicit about it (which is a huge problem by itself), but they seem to be saying that they are only predicting the “first wave”—so they are not predicting 0 deaths after July—they just define those deaths as no longer being part of the “first wave”. So the way they present the model predictions is even more unbelievably wrong than the model itself!
There are already free online filing options for people with incomes up to 69K. https://www.irs.gov/filing/free-file-do-your-federal-taxes-for-free
One big difference is that “having enough food” admits a value function (“quantity of food”) that is both well understood and, for the most part, smooth and continuous over the design space, given today’s design methodology (if we try to design a ship with a particular amount of food and make a tiny mistake, it’s unlikely that the quantity of food will change that much). In contrast, the “how well is it aligned” metric is very poorly understood (at least compared with “amount of food on a spaceship”) and a lot more discontinuous (using today’s techniques of designing AIs, a tiny error in alignment is almost certain to cause catastrophic failure). Basically—we do not know what exactly it means to get it right, and even if we knew, we do not know what the acceptable error tolerances are, and even if we knew that, we do not know how to meet them. None of that applies to the amount of food on a spaceship.
I think your analysis of “you’re only X because of Y” is missing the implicit “you are doing it wrong” accusation in the statement. Basically, the implied meaning, I think, is that while there are acceptable reasons to X, you are lacking any of them; instead, your reason for X is Y, which is not one of the acceptable reasons. Which is why your Z is a defense—claiming to have reasons in the acceptable set. And another defense might be to respond directly to the implied accusation and explain why Y should be an OK reason to X. “You’re only enjoying that movie scene because you know what happened before it”—“Yeah, and what’s wrong with that?”