I don’t think this particular idea is much of a concern (people could already profit from assassinating CEOs just by shorting the company’s stock on the ordinary market, yet they do not...), but I would be interested to see some Metaculus questions (or just community discussion, as here) about some of the key cruxes I’m wondering about. Like “how much would liquidity increase/decrease if funds were stored in an S&P 500 index instead of just sitting in cash?”, or perhaps “Which country will be first to see more than $100 million of daily volume (or whatever) on legal prediction platforms?”
Cross-posting my long comment from the EA Forum:
I always appreciate your newsletter, and agree with your grim assessment of prediction markets’ long-suffering history. Here is what I am left wondering after reading this edition:
Okay, so the USA has mostly dropped the ball on this for forty years. But what about every other country? China seems pretty ambitious and willing to make things happen in order to secure their place on the world stage—where is the CCP-subsidized market hive-mind driving all the crucial central planning decisions? Well, maybe a prediction market doesn’t play well with wanting to exert lots of top-down control and suppress free speech. Okay, what about countries in Europe? What about Taiwan or Singapore? Nobody has yet achieved some kind of Hansonian utopia, so what is the limiting factor?
Maybe there is a strong correlation between different countries’ policy decisions caused by a ‘global elite culture’ that picks similar policies everywhere, and occasionally screws up by all deciding to ignore the potential of nuclear power, all choosing the same solutions to covid-19, etc. This is Hanson’s idea; I find it a bit conspiratorial compared to my framing in the next point. (But if it’s true, what are we to do? Perhaps try to build up EA/rationalism as a movement until we can influence global elite culture for the better? Or maybe become advisers to potential outliers from global elite culture, like the Saudi Arabian monarchy or something? Or somehow construct whole alternative institutions using stuff like crypto and charter cities, “Atlas Shrugged” style?)
Maybe it’s less about ‘elite’ culture and more about universal human biases. Perhaps prediction markets are abhorrent to normal folks insofar as they offend traditional status hierarchies (by holding people too closely to their word and showing the hypocrisy of leaders, or something), and that causes them to be rejected wherever they are tried. That might sound nutty, but personally I think a big part of why healthcare systems are so complicated and expensive is that so many reform ideas that look great on paper run into universal, deeply ingrained cultural preferences—for instance, the taboo tradeoff between money & lives prevents healthcare systems from making price information too obvious or making inequalities in care quality too visible. Similarly, the human desire to show that we care causes us to overspend late in life when help would not do much good, and underspend on cheap preventative measures (like doing more to push people towards better exercise, diet, etc). If this is true for prediction markets, what do we do? Maybe it suggests that we should build the prediction-market future outward from stock markets (via services like Kalshi), since stock markets are (grudgingly!) tolerated by human culture, rather than trying to persuade governments or corporations to adopt prediction markets directly (where it is too easy for a leader to veto the idea based on gut opposition).
Maybe it’s misleading to frame the question this way, as “why is EVERY COUNTRY failing in the SAME WAY”, because most prediction market advocates have been inside the USA/anglosphere, so other countries haven’t really had a fair shot at being persuaded? (In this case, maybe all that’s necessary is to fund some prediction-market advocacy groups in Taiwan, Singapore, India, Dubai, South Korea, and other diverse locations until somebody finally takes the offer! Then, once one country is doing it, that will make it easier for the innovation to spread elsewhere.)
Maybe there’s no special explanation, and every nation is just failing for its own distinct reason, just because governments aren’t that competent and reform is hard and the possibility space of failure is much larger than the small target of success. Countries make dumb decisions all the time and are constantly leaving large amounts of potential economic growth on the table, just because life is tough and it’s hard to make good decisions. Null hypothesis! In which case we just need to try harder and then our dreams might come true (even here in the USA, despite the grim history of defeats). This hypothesis gets stronger when you consider that the existing community of prediction-market advocacy is quite small and there is not much funding in it.
Maybe prediction markets are somehow not ready for prime time, lacking some crucial feature that would speed adoption and retrospectively seem like an obvious part of the complete package? (In this case, the challenge is to figure out what feature to add, or how to otherwise tweak the design to get product/market fit, and then it’s off to the races. Hanson has often identified “the difficulty of finding a customer willing to pay for info” as the most difficult piece of the puzzle. Hence the appeal of corporate prediction markets, where the company itself is the customer, or of persuading governments to subsidize prediction markets on topics of public interest. This reasoning also lines up with your complaint about how, without a customer willing to subsidize the market, markets gravitate towards covering meaningless emotional topics like sports, where people are irrational.) Maybe we need to make prediction markets easier to use so they can be adopted more easily by corporate customers (my “gitlab of prediction markets” idea), or maybe we need to increase the liquidity of markets by lowering fees and parking the invested funds in the S&P 500 so that prediction stops being a zero-sum game. Maybe we just need to figure out better and better ways around the regulations banning prediction markets. Et cetera.
Of course I would be eager to hear your thoughts on what the key limiting factor(s) might be.
In your last newsletter, you remarked, “It’s kind of interesting how $40k [given away by Astral Codex Ten Grants] feels like a significant quantity of all the funding there is for small experiments in the forecasting space. This is probably suboptimal.” What prediction-market experiments would you be most interested to see run?
Do you think that advocacy/lobbying for prediction markets (perhaps in other countries as mentioned) is a worthwhile endeavor? Or do you think that would be less effective than experimenting with different market designs?
Robin Hanson wanted to run a “Fire the CEO” conditional prediction market about companies’ stock price if the CEO did / did not resign by the end of the quarter. I guess the plan would be to initially subsidize the market yourself, then prove the worth of the idea, then run a service where companies pay you to be included among your many company-specific markets. Do you think this is a promising plan? Is money the biggest roadblock, or would it be illegal to offer this service in the USA / without the companies’ permission / etc? (Could we just run it in the UK or something?)
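To make the mechanism concrete, here is a minimal sketch (all numbers hypothetical) of the decision signal a “Fire the CEO” market pair would produce: two called-off markets each price the end-of-quarter stock conditional on the CEO leaving or staying, and the actionable output is simply the difference between those conditional prices.

```python
# Hypothetical market-implied expectations from a pair of called-off
# conditional markets (bets are refunded if their condition fails to occur):
price_if_ceo_stays = 102.0   # E[stock at quarter end | CEO stays], $/share
price_if_ceo_leaves = 109.0  # E[stock at quarter end | CEO resigns], $/share

# The decision-relevant signal is the conditional difference:
implied_effect = price_if_ceo_leaves - price_if_ceo_stays
print(f"Market-implied value of replacing the CEO: ${implied_effect:.2f}/share")
if implied_effect > 0:
    print("The market expects the company to be worth more without this CEO.")
```

Of course, the hard parts Hanson emphasizes (subsidizing enough liquidity, and handling selection effects in conditional markets) are exactly what the pilot service would have to prove out.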
FTX is a huge crypto exchange, they’ve already run prediction markets in the past for presidential elections, and they’re apparently EA-aligned. Could we ask them if they’d please run other prediction markets that we think would be useful, such as a conditional prediction market about the stock market’s performance conditional on a presidential election outcome, or a prediction market about some high-minded EA-relevant topic?
There is kind of a difference between “studying forecasting to benefit EA” (which seems to describe many of your projects at Quantified Uncertainty Research Institute, such as creating software to help grantmakers estimate impact), versus “prediction markets as an EA cause area” (under the heading of “improving institutional decision-making”, progress studies, and improving “civilizational adequacy”). How do you feel about the relationship between the two, and which side do you tend to feel is the better place to focus effort in terms of impact?
This is a lot of questions, so no pressure to respond to everything—I mostly intend this as food for thought. Also, I wrote this post casually, but let me know if you think it would be good to rework as a top-level post (ie, if you think these are good questions that people should be thinking more about).
You might be interested in this post on the EA Forum advocating for the potential of free/open-source software as an EA cause or EA-adjacent ally movement (see also my more skeptical take in the comments). https://forum.effectivealtruism.org/posts/rfpKuHt8CoBtjykyK/how-impactful-is-free-and-open-source-software-development
I also thought this other EA Forum post was a good overview of the general idea of altruistic software development, some of the key obstacles & opportunities, etc. It’s mostly focused on near-term projects for creating software tools that can improve people’s reasoning and forecasting performance, not cybersecurity for BCIs or digital people, but you might find it helpful for giving an in-depth EA perspective on software development that sometimes contrasts and sometimes overlaps with the perspective of the open-source movement: https://forum.effectivealtruism.org/posts/2ux5xtXWmsNwJDXqb/ambitious-altruistic-software-engineering-efforts
Eliezer’s point is well-taken, but the future might have lots of different kinds of software! This post seemed to be mostly talking about software that we’d use for brain-computer interfaces, or for uploaded simulations of human minds, not about AGI. Paul Christiano talks about exactly these kinds of software security concerns for uploaded minds here: https://www.alignmentforum.org/posts/vit9oWGj6WgXpRhce/secure-homes-for-digital-people
In the traditional view a person is free. He is autonomous in the sense that his behavior is uncaused.
That view, together with its associated practices, must be re-examined when a scientific analysis reveals unexpected controlling relations between behavior and environment. By questioning the control exercised by autonomous man and demonstrating the control exercised by the environment, a science of behavior also seems to question dignity or worth.
A person is responsible for his behavior, not only in the sense that he may be justly blamed or punished when he behaves badly, but also in the sense that he is to be given credit and admired for his achievements.
A scientific analysis shifts the credit as well as the blame to the environment, and traditional practices can then no longer be justified. (These are sweeping changes, and those who are committed to traditional theories and practices naturally resist them.)
As the emphasis shifts to the environment, the individual seems to be exposed to a new kind of danger: who is to construct the controlling environment, and to what end?
Autonomous man presumably controls himself in accordance with a built-in set of values; he works for what he finds “good”. But what will the putative controller find “good”, and will it be good for those he controls?
-- B.F. Skinner, 1971
You might enjoy this portion of a video I made about the videogame The Witness, where I analyze the above quote in the context of themes of free will and how we should think about ideally structuring society, knowing about the systems of social control and prediction that you describe. In the portion of the video I linked, I first discuss a contrasting quote by Douglas Hofstadter, then I play the above Skinner quote, and then I try to analyze the conflict between an individual-centered versus top-down social-theory-centered view of the world. But ultimately, there must be a way of merging both views, since we know that societal influences are powerful but we also know that individuals are capable of making thoughtful choices based on reasoned deliberation and moral principles.
You might be interested in this 80,000 Hours podcast about the extreme moral uncertainty created by our complex world, and the (tongue-in-cheek) “moral case against ever leaving the house”. I agree that it can be dizzying to think about how our deep uncertainty about the future (which philosopher Hilary Greaves calls “moral cluelessness”) seems to potentially undermine all our efforts—not just our altruistic endeavors, but what we seek to accomplish in our jobs, in our personal relationships, etc.
But the logic of expected value maximization tells us that we ought to go forth regardless, and make an effort according to our best judgement even in a shifting landscape that threatens to sometimes reverse the effect of our actions. To me, this embrace of uncertainty (as opposed to the obsession with blame, credit, and false guarantees that cloud much moral thinking) is a core part of what it means to be an effective altruist and a rationalist. Indeed, I think the embrace of uncertainty is a big part of the “edge” that allows effective-altruist donations to do such incredible amounts of good on average.
Here is the EA Forum tag page for “Moral Uncertainty”, which collects a bunch of thoughtful posts on the subject you might be interested in.
It is a pretty big ask of individuals (who perhaps are making a blog post with a list of yearly predictions, in the style of Slate Star Codex, theZvi, Matt Yglesias, or others) to do all this math in order to generate and evaluate Normal Predictions of continuous variables. I think your post almost makes more sense as a software feature request—more prediction markets and other platforms should offer Metaculus-style tools for tweaking a distribution as a way of helping people generate a prediction:
It would be awesome if a prediction market let me bet on the 2024 election by first giving me a little interactive tool where I could see how my ideas about the popular vote probability distribution might translate into a Democratic or Republican victory. Then I could bet on the binary-outcome market after exploring my beliefs in continuous-variable land.
Perhaps blogging platforms like LessWrong or Substack could have official support for predictions—maybe you start by inputting your percentage chance for a binary outcome, and then the service nudges you about whether you’d like to turn that prediction into a continuous variable, to disambiguate cases like your example of “N(50.67, 0.5), N(54, 3), N(58, 6)” all giving 91% odds of a win. This seems like a good way of introducing people to the unfamiliar new format of Normal Predictions.
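A quick sketch of the arithmetic behind that ambiguity (function name mine): each Normal belief about the popular-vote share maps to a binary win probability via the normal CDF, assuming a 50% vote share is the winning threshold, which is why all three distributions land on roughly the same number.

```python
from math import erf, sqrt

def win_probability(mean, sd, threshold=50.0):
    """P(vote share > threshold) under a Normal(mean, sd) belief,
    computed from the standard normal CDF."""
    z = (mean - threshold) / sd
    return 0.5 * (1 + erf(z / sqrt(2)))

# The three beliefs from the example, all collapsing to ~91%:
for mean, sd in [(50.67, 0.5), (54, 3), (58, 6)]:
    print(f"N({mean}, {sd}) -> {win_probability(mean, sd):.0%} chance of winning")
```

All three print 91%, even though they express wildly different beliefs about the margin of victory—exactly the information a continuous-variable interface would preserve and a binary market throws away.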
“Suggestion is to use saliva for rapid tests to get that time back, but by the time we do that, I’m assuming the wave will already be over.”
I’m confused; is this saying:
That we should use saliva-based PCR tests instead of nasal-swab rapid tests? (But PCRs are expensive and their turnaround time is often terrible?)
That we should manufacture new types of rapid tests which can use saliva?
That existing rapid tests can already use saliva, and we should just start popping rapid tests in our mouths like lollipops instead of sticking them up our noses??
I’m guessing the correct answer is #2, even though #3 would be by far the most fun and convenient thing to be true.
How can it be decades away if a couple of random “transhumanist” couples are already doing it? Mass adoption might be decades away, but lesswrongers are weird people who are often interested in early-adopting new technologies (like cryptocurrency, cryonics, etc). https://www.geneticsandsociety.org/biopolitical-times/first-polygenic-risk-score-baby
Shouldn’t it be possible (with effort) to have polygenically-selected children right now? Companies like LifeView are already open for business. Yes, it requires expensive IVF, and you have to compute the intelligence scores yourself, since LifeView is ostensibly only about health scoring. But neither of those friction points seems likely to change much in the next 5 years. So I think the answer might be: rather than wait until it gets easy and everyone does it, if you want polygenic selection you should put in the work now and get the competitive benefits of being an early adopter!
Of course, this is much easier said than done. I’m looking to have kids in the next few years and my wife and I are very interested in PGD. I would greatly appreciate if anyone who goes through the steps before us could post a detailed “how-to” guide walking future parents through the process.
See the similar comment here.
Personally, I think that we can do better than starting a nuclear war (which, after all, just delays the problem, and probably leaves civilization in an even WORSE place to solve alignment when the problem eventually rears its head again—although your idea about disaster-proofing MIRI and other AI safety orgs is interesting), as I said in a reply to that comment.
Trying to reduce Earth’s supply of compute (including through military means), and do other things to slow down the field of AI (up to and including the kind of stuff that we’d need to do to stop the proliferation of Nick Bostrom’s “easy nukes”) seems promising. Then with the extra time that buys, we can make differential progress in other areas:
Alignment research, including searching for whole new AGI paradigms that are easier to align.
Human enhancement via genetic engineering, BCIs, brain emulation, cloning John Von Neumann, or whatever.
Better governance tech (prediction markets, voting systems, etc), so that the world can be governed more wisely on issues of AI risk and everything else.
But just as I said in that comment thread, “I’m not sure if MIRI / LessWrong / etc want to encourage lots of public speculation about potentially divisive AGI ‘nonpharmaceutical interventions’ like fomenting nuclear war. I think it’s an understandably sensitive area, which people would prefer to discuss privately.”
I think there are some ways of flipping tables that offer some hope (albeit a longshot) of actually getting us into a better position to solve the problem, rather than just delaying the issue. Basically, strategies for suppressing or controlling Earth’s supply of compute, while pressing for differential tech development on things like BCIs, brain emulation, human intelligence enhancement, etc, plus (if you can really buy lots of time) searching for alternate, easier-to-align AGI paradigms, and making improvements to social technology / institutional decisionmaking (prediction markets, voting systems, etc).
I would write more about this, but I’m not sure if MIRI / LessWrong / etc want to encourage lots of public speculation about potentially divisive AGI “nonpharmaceutical interventions” like fomenting nuclear war. I think it’s an understandably sensitive area, which people would prefer to discuss privately.
Agreed that in the long run, these kinds of slow-rolling dysgenic effects are no big deal:
Polygenic selection and other genetic tech are already powerful enough to counter dysgenic effects, and will only become stronger with time.
Even if there was no ability to genetically fix dysgenic effects, our society is probably improving in other ways at a fast enough clip to overcome the decay (ie, medical tech advancing faster than our health declines; education & information technology more than making up for declines in intelligence, etc).
More generally, the world is likely moving too fast for slow-rolling effects like genetic stuff (or other things, like the impact of immigration on the voting patterns of a country or the impact of CO2 emissions on climate) to matter overwhelmingly in the long run. By the year 2200, the world will likely be some combination of technological utopia & devastated post-catastrophe ruins—the slow-rolling stuff will not matter much either way.
Of course, genetics is relevant to understanding history and having a good model of the world, which is relevant to policy and etc. It would be foolish to try and encourage everyone to be a basketball star, given that height is so genetically determined. (It would be equally foolish to punish people who came up short as if this made them sinful or unworthy.)
So, I think that genetics as a tool for forward predictions of dysgenic effects is not super helpful/relevant. But if you are going to make forward predictions anyways, it is important to look beyond just population averages. There is still plenty of selection even in our modern-day world of abundance—it is just based on assortative mating and the accumulation of money/power/prestige, rather than the barbarism of who lives vs who dies in a world of Malthusian subsistence poverty. So the burden of accumulating mutations does not fall equally on all parts of society, but presumably filters down to the bottom. Similarly, it’s true that fertility declines with increasing wealth, but then it starts shooting up again above a household income of around $200K/year. Those high-fertility rich people are not a big part of the population (only around 1%-2%), but the rich obviously have an outsized influence on policy, economic growth, probably culture, etc. These effects will not give us an “idiocracy” society of general decay—instead they will give us a more extremely unequal society (which will get even more unequal when some people start using polygenic selection and others don’t).
I think the short-term plan for dealing with this increasing inequality is to just keep piling more and more money into redistribution and charity. IMO this plan has been working reasonably well for the past 40 years at least—pre-tax income inequality has gone up, but increasing taxes for redistribution have meant that post-taxes-and-transfers income inequality has actually stayed remarkably flat since the 1960s. For now, we can just keep passing more taxes and creating new redistributive schemes, plus hopefully reforming the system for extra efficiency (like moving some cumbersome and restrictive benefits to a more flexible and simpler UBI).
The long-term plan is somewhere on a spectrum between “genetic / cybernetic technology lets us finally fix all the problematic aspects of genetics and natural selection” and “lol, of course there’s no long term plan, since when do humans have a functional long-term plan for anything!” (It is justified in this case to lack a long-term plan, since genetic trends are so slow-rolling as discussed earlier… by the time the short-term plan breaks, we will have other bigger problems.)
I appreciate the structured, concise, almost fully bullet-point format of this post. Bullet points are underutilized as a viable writing style for presenting finished work!
Zvi, what are your thoughts on covid in the USA during the winter?
On the one hand:
The delta wave is ending, and there are no new variants on the horizon.
Vaccinations rates are high and slowly rising.
The overall rate of immunity (from vaccines + natural infection) is high and probably rising (although this is a fight between fading vaccine effectiveness vs natural infections & vaccine booster shots).
On the other hand, winter is traditionally the worst time for colds and flus, including the monster covid wave of 2020. It seems hard to believe we’ll skate through winter 2021 without at least some bump in covid cases.
If you have thoughts about the course of covid beyond this winter (like the prospect for future variants or how necessary it will be for most of the population to take booster shots at a regular cadence), I’d be interested in that too.
One thing that I think will be consequential, in a kind of hilarious way, is that we’re probably going to skip two flu seasons in a row, which will possibly set us up for a whopper flu season down the road. Last winter the flu was practically nonexistent, crowded out by covid. This winter, based on my read of this Metaculus forecast, the flu season is expected to be only half as intense as a typical pre-covid year—peaking at around “4% ILI” instead of around 8%.
If we get a whopper flu season in 2022 or 2023 (perhaps 2-3x worse than normal), it will be interesting to watch how the media and culture respond—will they go into covid-esque hysterics about the overcrowded hospitals and demand flu lockdowns? Or will it act as a nudge in the other direction, convincing people that the lockdowns and NPIs have got to stop somewhere? Or perhaps we’ll never get a whopper flu season—maybe Delta will remain the transmissibility king of all the infectious viruses, even after stabilizing and going endemic? Perhaps our transition to remote work, covid-caution in shared indoor spaces, and mRNA vaccines means a permanently more hygienic world?
Overall, I’m very interested in the question of whether covid cases ever crash to suppression levels (as would perhaps have happened in a world without the Delta variant), versus only very slowly trailing off into the background of ordinary pre-covid colds & flus. I think this is very important for predicting how culture will evolve going forwards. A sharper transition from pandemic to negligible covid would encourage more of a snap-back to pre-covid “normality”: more mass concerts, more comfort visiting shared indoor spaces, fewer masks and NPIs, less remote work. By contrast, a world where covid lingers interminably, with no sharp transition into the post-pandemic era, will make it harder for culture to coordinate a “return to normal”.
I’m not sure which side I’m cheering for, but it’s clearly an important question regardless. (Remote work and better tech adoption across the board have been highlights of pandemic culture. A slightly increased focus on scientific progress is obviously welcome. And the pre-covid world sometimes strikes me as being a little too heavy on present consumption, like travel vacations, with not enough long-term focus. But of course all the masking and social distancing and reduced socializing has been miserable, and the madness of constantly-changing restrictions is terrible for both business and living an enjoyable human life.)
Well, ironic to the extent that:
it is about abstract intellectual ideas vs going out and doing the stuff, as Jacob exhorts us to do
in that sense it is arguably more on the “personal development” side of things
it is a monklike, non-social activity
Anti-ironic (English doesn’t really have a word for this… like when something is oddly fitting, as when someone named “James Baker” is actually a baker) insofar as LessWrong / rationalism is a pretty strong shared intellectual culture, and these seemingly solitary monkish endeavors are actually a space for social connection; thus perhaps we are fulfilling Jacob’s exhortation.
I am pleased by the charming irony (and… anti-irony??) of this post. A complex point-by-point commentary on the writings of putanumonit on loneliness, this post recalls the intellectual traditions of ancient monks (a comparison that Jacob himself has made elsewhere: https://putanumonit.com/2021/04/03/monastery-and-throne/). As the author notes, writing like this is both a solitary endeavor and an oddly communal activity that demonstrates the depth of connection possible in distributed intellectual movements (whether modern rationalism or the medieval world of IRL monasteries). Of course it’s intrinsically a bit silly to be enumerating the logical and psychological complexities of an exhortation to just get out there and actually socialize. But I’m obviously not too bothered by that silliness, because I’m here doing the same thing!
Perhaps the equanimous and incredibly joyful monk was within us the whole time?
The best resource I’ve found on this topic is a comprehensive investigation into both the nature of the problem and a list of potential solutions, by rationalist blog Nintil:
How many acres burn every year? Is it getting worse?
Is this data valid?
The role of climate change
Vapor Pressure Deficit
The historical fire record
Why do the fires start?
Natural fires matter: health costs
The WUI conundrum
Who is in charge where?
Trends elsewhere. Is it getting worse in general?
Fire suppression, fire exclusion and prescribed fire
The firefighting trap
Were Native Americans doing it?
So is it fire exclusion or climate change?