Engineer working on next-gen satellite navigation at Xona Space Systems. I write about effective-altruist and longtermist topics at nukazaria.substack.com, or you can read about puzzle videogames and other things at jacksonw.xyz
Jackson Wagner
“What if we could redesign society from scratch? The promise of charter cities.” [Rational Animations video]
Taking Good Heart Tokens Seriously, So Help Me God
[Question] What Twitter fixes should we advocate, now that Elon is on the board?
Cross-posting my long comment from the EA Forum:
I always appreciate your newsletter, and agree with your grim assessment of prediction markets’ long-suffering history. Here is what I am left wondering after reading this edition:
Okay, so the USA has mostly dropped the ball on this for forty years. But what about every other country? China seems pretty ambitious and willing to make things happen in order to secure their place on the world stage—where is the CCP-subsidized market hive-mind driving all the crucial central planning decisions? Well, maybe a prediction market doesn’t play well with wanting to exert lots of top-down control and suppress free speech. Okay, what about countries in Europe? What about Taiwan or Singapore? Nobody has yet achieved some kind of Hansonian utopia, so what is the limiting factor?
Maybe there is a strong correlation between different countries’ policy decisions caused by a ‘global elite culture’ that picks similar policies everywhere, and occasionally screws up by all deciding to ignore the potential of nuclear power, all choosing the same solutions to covid-19, etc. This is Hanson’s idea; I find it a bit conspiratorial compared to my framing in the next point. (But if it’s true, what are we to do? Perhaps try to build up EA/rationalism as a movement until we can influence global elite culture for the better? Or maybe become advisers to potential outliers from global elite culture, like the Saudi Arabian monarchy or something? Or somehow construct whole alternative institutions using stuff like crypto and charter cities, “Atlas Shrugged” style?)
Maybe it’s less about ‘elite’ culture and more about universal human biases. Perhaps prediction markets are abhorrent to normal folks insofar as they offend traditional status hierarchies (by holding people too closely to their word and showing the hypocrisy of leaders, or something), and that causes them to be rejected wherever they are tried. That might sound nutty, but personally I think a big part of why healthcare systems are so complicated and expensive is because so many reform ideas that look great on paper run into universal deeply-engrained cultural preferences—for instance, the taboo tradeoff between money & lives prevents healthcare systems from making price information too obvious or making inequalities in care quality too visible. Similarly, the human desire to show that we care causes us to overspend late in life when help would not do much good, and underspend on some cheap preventative things (like doing more to push people towards better exercise/diet/etc). If this is true for prediction markets, what do we do? Maybe it suggests that we should start building the prediction-market future starting from stock markets (via services like Kalshi), since stock markets are (grudgingly!) tolerated by human culture, rather than trying to persuade governments or corporations to adopt them (where it is too easy for a leader to veto it based on their gut opposition).
Maybe it’s misleading to frame the question this way, as “why is EVERY COUNTRY failing in the SAME WAY”, because most prediction market advocates have all been inside the USA/anglosphere, so other countries haven’t really had a fair shot at being persuaded? (In this case, maybe all that’s necessary is to fund some prediction-market advocacy groups in Taiwan, Singapore, India, Dubai, South Korea, and other diverse locations until somebody finally takes the offer! Then, once one country is doing it, that will make it easier for the innovation to spread elsewhere.)
Maybe there’s no special explanation, and every nation is just failing for its own distinct reason, just because governments aren’t that competent and reform is hard and the possibility space of failure is much larger than the small target of success. Countries make dumb decisions all the time and are constantly leaving large amounts of potential economic growth on the table, just because life is tough and it’s hard to make good decisions. Null hypothesis! In which case we just need to try harder and then our dreams might come true (even here in the USA, despite the grim history of defeats). This hypothesis gets stronger when you consider that the existing community of prediction-market advocacy is quite small and there is not much funding in it.
Maybe prediction markets are somehow not ready for prime-time, lacking some crucial feature that would speed adoption and retrospectively seem like an obvious part of the complete package? (In this case, the challenge is to figure out what feature to add or how to otherwise tweak the design to get product/market fit, and then it’s off to the races. Hanson has often identified “the difficulty of finding a customer willing to pay for info” as the most difficult piece of the puzzle. Hence the appeal of making corporate prediction markets more usable, or of persuading governments to subsidize prediction markets on topics of public interest. This reasoning also lines up with your complaint about how, without a customer willing to subsidize the market, markets gravitate towards covering meaningless emotional topics like sports, where people are irrational.) Maybe we need to make prediction markets easier to use so they can be adopted more easily by corporate customers (my “gitlab of prediction markets” idea), or maybe we need to increase the liquidity of markets by lowering fees and doing things like parking the invested funds in the S&P 500 so that prediction stops being a zero-sum game. Maybe we just need to figure out better and better ways around the regulations banning prediction markets. Et cetera.
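As an aside, the “park the invested funds in an index” tweak mentioned above can be sketched in a few lines (all numbers made up): if traders’ stakes earn the index return while locked up, the payout pool grows before settlement, so the market is no longer strictly zero-sum for participants.

```python
stake = 100.0        # each of two traders stakes $100 on a binary question
r_index = 0.07       # hypothetical S&P 500 return over the market's lifetime
won_fraction = 0.5   # suppose a trader ends up holding half the payout pool

# Plain zero-sum market: winners gain exactly what losers lose.
payout_zero_sum = 2 * stake * won_fraction   # $100: no aggregate growth

# With stakes parked in the index: the whole pool grows before settlement.
pool = 2 * stake * (1 + r_index)
payout_with_parking = pool * won_fraction
print(round(payout_with_parking, 2))  # 107.0
```

In aggregate, both sides of the market collectively earn the index return, which helps offset fees and makes participation less of a negative-expected-value proposition for the average trader.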
Of course I would be eager to hear your thoughts on what the key limiting factor(s) might be.
In your last newsletter, you remarked, “It’s kind of interesting how $40k [given away by Astral Codex Ten Grants] feels like a significant quantity of all the funding there is for small experiments in the forecasting space. This is probably suboptimal.” What prediction-market experiments would you be most interested to see run?
Do you think that advocacy/lobbying for prediction markets (perhaps in other countries as mentioned) is a worthwhile endeavor? Or do you think that would be less effective than experimenting with different market designs?
Robin Hanson wanted to run a “Fire the CEO” conditional prediction market about companies’ stock price if the CEO did / did not resign by the end of the quarter. I guess the plan would be to initially subsidize the market yourself, then prove the worth of the idea, then run a service where companies pay you to be included among your many company-specific markets. Do you think this is a promising plan? Is money the biggest roadblock, or would it be illegal to offer this service in the USA / without the companies’ permission / etc? (Could we just run it in the UK or something?)
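For concreteness, here is a minimal sketch (all numbers hypothetical) of the decision signal such a conditional market would produce. In Hanson’s design, trades in the branch that fails to occur are called off and stakes refunded, so each market quotes the stock price conditional on its branch:

```python
# Two conditional markets for one (hypothetical) company, each quoting
# the expected end-of-quarter stock price under its branch. Trades in
# the branch that doesn't happen are called off (stakes refunded).
price_if_ceo_leaves = 52.0  # hypothetical market-clearing forecast
price_if_ceo_stays = 47.0   # hypothetical market-clearing forecast

# The advice to the board is just a comparison of the two conditional prices.
signal = "fire" if price_if_ceo_leaves > price_if_ceo_stays else "retain"
print(signal)  # fire
```

The called-off-bet structure is what makes the two prices interpretable as conditional forecasts rather than unconditional ones mixed with guesses about whether the CEO will actually leave.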
FTX is a huge crypto exchange; they’ve already run prediction markets in the past for presidential elections, and they’re apparently EA-aligned. Could we ask them if they’d please run other prediction markets that we think would be useful, such as a conditional prediction market about the stock market’s performance conditional on a presidential election outcome, or a prediction market about some high-minded EA-relevant topic?
There is kind of a difference between “studying forecasting to benefit EA” (which seems to describe many of your projects at Quantified Uncertainty Research Institute, such as creating software to help grantmakers estimate impact), versus “prediction markets as an EA cause area” (under the heading of “improving institutional decision-making”, progress studies, and improving “civilizational adequacy”). How do you feel about the relationship between the two, and which side do you tend to feel is the better place to focus effort in terms of impact?
This is a lot of questions, so no pressure to respond to everything—I mostly intend this as food for thought. Also, I wrote this post casually, but let me know if you think it would be good to rework as a top-level post (ie, if you think these are good questions that people should be thinking more about).
A bridge to Dath Ilan? Improved governance on the critical path to AI alignment.
You claim that “just fight the war” is a wasteful and inefficient way to defend against invasion compared to clever strategies like taking out the enemy’s leadership or deploying a propaganda campaign to change the invading nation’s opinion about your nation’s citizens. But doesn’t most invasion-defense mostly consist of just fighting the war? If assassinations are so easy and are obviously the right thing to do, shouldn’t they happen more often? When was the last time assassinations were used to end a war anywhere? Without examples, the ideas in this post seem unmoored from any real assessment about what’s hard vs easy.
As for cultural/propaganda solutions, these all seem far too slow. Once the enemy’s tanks are rolling, the war will be decided in a matter of days or weeks—no time to go about changing the cultural attitudes of an entire population! (And how might we expect to shift their attention to domestic issues overnight, when we have to compete with the headline that their country has just declared war??) I could see some of these defensive tactics working as a way to try and prevent invasion from ever occurring in the first place (like Taiwan’s situation), or as a way to make the best of a small international incident (like how the occasional India/Pakistan flare-ups are played by both governments to score domestic political points), or that they would become relevant amid a long, drawn-out stalemate. But if you’re the victim of a fast-moving surprise invasion, no clever cultural shenanigans are going to stop the hard power streaming across your borders.
X-Risk, Anthropics, & Peter Thiel’s Investment Thesis
Charter Cities: why they’re exciting & how they might work
(heavy spoilers for ending of HPMOR):
HPMOR takes place in the 1990s, and importantly takes place before most people realized that the mysterious Quirrell was actually none other than the all-powerful nefarious amoral supergenius behind Lord Voldemort. Presumably, the exchange value of Quirrell points fluctuated over time—low during periods when they only seemed useful for getting favors from an eccentric Defense Professor, high as the Defense Professor became increasingly well-known for his extreme competence and mysterious proximity to important events at Hogwarts, then reaching an astronomically high value when it seemed that Quirrell was on the cusp of achieving total domination over all of human civilization forever, then finally crashing to around the current value after Quirrell was defeated and imprisoned indefinitely until such time as he might be safely healed.
Economists debate whether the current market value of Quirrell points derives more from the possibility of receiving favors from Quirrell in future scenarios where he is revived (whether during a utopian far-future or a disastrous near-term return of Lord Voldemort), or merely from the fact that since only around 21 thousand Quirrell points were minted before its anonymous founder disappeared, Quirrell points form a sound monetary basis for a noninflationary store of value.
None other than Peter Thiel wrote a huge essay about investing while under anthropic shadow, and I wrote a post analyzing said essay! It is interesting, although pretty abstract in a way that probably makes it more relevant to organizations like OpenPhilanthropy than to most private individuals. Some quotes from Thiel’s essay:
Apocalyptic thinking appears to have no place in the world of money. For if the doomsday predictions are fulfilled and the world does come to an end, then all the money in the world — even if it be in the form of gold coins or pieces of silver, stored in a locked chest in the most remote corner of the planet — would prove of no value, because there would be nothing left to buy or sell. Apocalyptic investors will miss great opportunities if there is no apocalypse, but ultimately they will end up with nothing when the apocalypse arrives. Heads or tails, they lose. …A mutual fund manager might not benefit from reflecting about the danger of thermonuclear war, since in that future world there would be no mutual funds and no mutual fund managers left. Because it is not profitable to think about one’s death, it is more useful to act as though one will live forever.
Since it is not profitable to contemplate the end of civilization, this distorts market prices. Instead of telling us about the objective probabilities of how things will play out, prices are based on probabilities adjusted by the anthropic logic of ignoring doomed scenarios:
Let us assume that, in the event of [the project of civilization being broadly successful], a given business would be worth $100/share, but that there is only an intermediate chance (say 1:10) of that successful outcome. The other case is too terrible to consider. Theoretically, the share should be worth $10, but in every world where investors survive, it will be worth $100. Would it make sense to pay more than $10, and indeed any price up to $100? Whether in hope or desperation, the perceived lack of alternatives may push valuations to much greater extremes than in nonapocalyptic times.
See my post for more.
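The arithmetic in Thiel’s example is easy to check (these are his hypothetical numbers, not market data):

```python
# Thiel's toy example: a share worth $100 if civilization broadly succeeds,
# ~$0 in the apocalyptic case, with roughly 1-in-10 odds of success.
p_success = 0.10
value_if_success = 100.0
value_if_doom = 0.0

# Objective expected value weights both branches.
objective_price = (p_success * value_if_success
                   + (1 - p_success) * value_if_doom)

# The anthropic-shadow price conditions on worlds where investors survive
# to observe the payoff, i.e. it ignores the doomed branch entirely.
survivor_price = value_if_success

print(round(objective_price, 2))  # 10.0
print(survivor_price)             # 100.0
```

The gap between the two prices is the “distortion” Thiel is pointing at: any price between $10 and $100 can look rational to an investor who only scores themselves in surviving worlds.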
I totally second this. A couple facts about my own routine:
I’ve been using a Quest 2 for regular (2x-3x weekly) brief exercise sessions for about a year. In combination with an occasional (0.5x-1x weekly) traditional strength-training routine and some jogging around the park, this is the most fun I’ve ever had exercising and the most consistent that I’ve ever been about it, although I still wish I was doing more.
I use this basic pair of weighted gloves when I play VR, which makes games like Beat Saber much more of a workout! I was initially put off by people talking about how it might be bad to use wrist weights because they could cause injury, but I think this refers to weights much heavier than 1lb per hand. In my experience, I haven’t come close to anything that feels dangerous—if you wouldn’t fear for your wrists while dancing and holding a pair of largeish apples, you shouldn’t have anything to worry about from using relatively light weighted gloves. I also got some of this foam-exercise-mat stuff to create a defined space in my garage for exercise and VR, which has been nice.
I use VR games not just as a workout in themselves, but also as a reward to get me to do more ordinary types of exercise. ie, I’ll promise myself “okay—get through this 1-hour r/bodyweightfitness routine, and then you can chill with an interesting non-exercise VR game for a while to cool off, and then play some fun Beat Saber to cap it all off.”
VR exercise really is extremely convenient. I live directly across from a park with a nice set of tennis courts, but: Sometimes other people are using the tennis courts! Sometimes it’s dark outside! Sometimes it’s too hot or cold! Sometimes I don’t have anyone who wants to play tennis with me! Sometimes I just don’t feel like going out and doing a bunch of exercise in public! In all those cases, it’s awesome to be able to just do a jam session of Beat Saber in my garage.
And about what games I enjoy:
I totally second Beat Saber, it’s by far my favorite. I’d also recommend the hilarious and engaging Gorn and the boxing game Creed: Rise to Glory for variety. Echo Arena, a multiplayer “Ender’s Game” style zero-gravity ultimate-frisbee game, might also make a decent exercise game although I haven’t put much time into it myself. When you start getting bored of Beat Saber, just get into downloading custom songs and it’ll be tons of fun all over again.
(When you’re wearing weighted gloves, it also turns any game where you’re often holding your hands out into a bit of a weird endurance exercise—archery games like In Death, rock-climbing simulator The Climb, or the slow-motion action game Super Hot. But although this can get tiring for your arm muscles, I don’t think it’s really good exercise since it doesn’t get your heart pumping.)
As someone who plays lots of both ordinary videogames and VR games, I think a big misconception among players of non-VR games is that, since VR looks so immersive and all-encompassing (literally a box strapped to your face), it would therefore be extremely addictive. In fact, VR games generally seem much less addictive than normal videogames. VR games tend to be much shorter / smaller in scope than the most popular non-VR games. And the fact that I’m moving around doing things (plus wearing a slightly-uncomfortable headset the whole time) just naturally causes me to want to switch things up and move between different activities more frequently, rather than sitting down in front of a screen where it takes a bit more mental effort to stop playing and get up to do something else.
In light of that fact, and in the name of encouraging you to buy a Quest 2 and use it mostly for exercise, here are a few non-exercise VR games that I figure LessWrongers might enjoy (skipping over some popular stuff that already appears on internet best-of-Quest lists). None of these games are more than about 5 hours long, per my point about VR games being mostly short-and-sweet:
A Fisherman’s Tale—a delightful, thoughtful game about recursion and symmetries.
Shadow Point—if you, like me, are in love with Braid and The Witness, Shadow Point feels a little like a VR fan-game inspired by those games’ aesthetic.
Virtual Virtual Reality—a funny, Portal-2-esque experience that explores different mechanics while touring you through a bunch of comedy skits intelligently parodying different ideas about cyberspace.
(There are also a bunch of really fascinating VR games on PC, like “4D Toys” about the physics of 4-dimensional objects, “Hyperbolica” about hyperbolic geometry, and “Paper Beast” (a beautiful game about ecology and physics simulation), which unfortunately don’t run natively on Quest. Instead they require some complicated/finicky setup to connect to a gaming PC to play.)
Rez Infinite / Tetris Effect—entrancing, relaxing, colorful games with nice music. Nice for taking a short break between workout sections.
The rule for the next round of the contest is:
The most-upvoted suggestion will become the rules for the next round of the contest, subject to the constraint that:
The winning suggestion for the next round must describe a contest that an ordinary person could reasonably enter with less than a day’s effort; eg asking for people to write and submit a google doc rather than asking contestants to create a 1000-acre wildlife preserve or implement a set of rules that is clearly paradoxical/impossible.
You are in luck; it would appear that Elizabeth has already produced some significant long-covid analysis of exactly this nature!
I follow the logic but also find myself amused by the thought that “simulate every possible unfriendly AI”, which sounds like literally the worst civilizational policy choice ever (no matter how safe we think our containment plan might be), could possibly be considered a good idea.
Eliezer’s point is well-taken, but the future might have lots of different kinds of software! This post seemed to be mostly talking about software that we’d use for brain-computer interfaces, or for uploaded simulations of human minds, not about AGI. Paul Christiano talks about exactly these kinds of software security concerns for uploaded minds here: https://www.alignmentforum.org/posts/vit9oWGj6WgXpRhce/secure-homes-for-digital-people
For starters, why aren’t we already offering the most basic version of this strategy as a workplace health benefit within the rationality / EA community? For example, on their workplace benefits page, OpenPhil says:
We offer a family forming benefit that supports employees and their partners with expenses related to family forming, such as fertility treatment, surrogacy, or adoption. This benefit is available to all eligible employees, regardless of age, sex, sexual orientation, or gender identity.
Seems a small step from there to making “we cover IVF for anyone who wants it (even if your fertility is fine) + LifeView polygenic scores” into a standard part of the alignment-research-agency benefits package. Of course, LifeView only offers health scores, but they will also give you the raw genetic data. Processing this genetic data yourself, DIY style, could be made easier—maybe there could be a blog post describing how to use an open-source piece of software and where to find the latest version of EA3, and so forth.
All this might be a lot of trouble for (if you are pessimistic about PGT’s potential) a rather small benefit. We are not talking Von Neumanns here. But it might be worth creating a streamlined community infrastructure around this anyways, just in case the benefit becomes larger as our genetic techniques improve.
You might be interested in this post on the EA Forum advocating for the potential of free/open-source software as an EA cause or EA-adjacent ally movement (see also my more skeptical take in the comments). https://forum.effectivealtruism.org/posts/rfpKuHt8CoBtjykyK/how-impactful-is-free-and-open-source-software-development
I also thought this other EA Forum post was a good overview of the general idea of altruistic software development, some of the key obstacles & opportunities, etc. It’s mostly focused on near-term projects for creating software tools that can improve people’s reasoning and forecasting performance, not cybersecurity for BCIs or digital people, but you might find it helpful for giving an in-depth EA perspective on software development that sometimes contrasts and sometimes overlaps with the perspective of the open-source movement: https://forum.effectivealtruism.org/posts/2ux5xtXWmsNwJDXqb/ambitious-altruistic-software-engineering-efforts
I think there are some ways of flipping tables that offer some hope (albeit a longshot) of actually getting us into a better position to solve the problem, rather than just delaying the issue. Basically, strategies for suppressing or controlling Earth’s supply of compute, while pressing for differential tech development on things like BCIs, brain emulation, human intelligence enhancement, etc, plus (if you can really buy lots of time) searching for alternate, easier-to-align AGI paradigms, and making improvements to social technology / institutional decisionmaking (prediction markets, voting systems, etc).
I would write more about this, but I’m not sure if MIRI / LessWrong / etc want to encourage lots of public speculation about potentially divisive AGI “nonpharmaceutical interventions” like fomenting nuclear war. I think it’s an understandably sensitive area, which people would prefer to discuss privately.
Why would showing that fish “feel empathy” prove that they have inner subjective experience? It seems perfectly possible to build a totally mechanical, non-conscious system that nevertheless displays signs of empathy. Couldn’t fish just have some kind of built-in, not-necessarily-conscious instinct to protect other fish (for instance, by swimming together in a large school) in order to obtain some evolutionary benefit?
Conversely, isn’t it possible for fish to have inner subjective experience but not feel empathy? Fish are very simple creatures, while “empathy” is a complicated social emotion. Especially in a solitary creature (like a shark, or an octopus), it seems plausible that you might have a rich inner world of qualia alongside a wide variety of problem-solving / world-modeling skills, but no social instincts like jealousy, empathy, loyalty, etc. Fish-welfare advocates often cite studies that seem to show fish having an internal sense of pain vs pleasure (eg, preferring water that contains numbing medication), or that bees can have an internal sense of being optimistic/risky vs pessimistic/cautious—if you think that empathy proves the existence of qualia, why are these similar studies not good enough for you? What’s special about the social emotion of empathy?
Personally, I am more sympathetic to the David Chalmers “hard problem of consciousness” perspective, so I don’t think these studies about behaviors (whether social emotions like jealousy or more basic emotions like optimism/pessimism) can really tell us that much about qualia / inner subjective experience. I do think that fish / bees / etc probably have some kind of inner subjective experience, but I’m not sure how “strong”, or vivid, or complex, or self-aware, that experience is, so I am very uncertain about the moral status of animals. (Personally, I also happily eat fish & shrimp all the time.)
In general, I think this post is talking about consciousness / qualia / etc in a very confused way—if you think that empathy-behaviors are ironclad proof of empathy-qualia, you should also think that other (pain-related, etc) behaviors are ironclad proof of other qualia.
The fasting analogy is interesting, as is the analogy with exercise—some kinds of activities are beneficial in the long-run even when they are damaging/unpleasant in the short run. But surely these are exceptions to the general rule, right?
Besides exercise, it’s not good to repeatedly injure yourself and then have the wounds heal. (Exercise is essentially the small, specific subtype of “injury” which is actually good for the body in the long term.)
Getting sick with a cold or flu is good for building immunity to that kind of virus when it comes around a second time, but aside from immunity concerns, it would be better for your health to never become sick at all. (As with viruses, the same goes for diseases caused by parasites or bacteria.) Especially as a young child, getting badly sick can impact your development and later IQ / income / etc substantially. Getting mildly sick is probably mildly bad for those same metrics.
On the other hand, I enjoyed your post a few months ago examining whether letting kids play outside and get dirty is helpful for calibrating their immune systems and reducing allergies later in life. It seems like the “hygiene hypothesis” is less firmly established than I thought, but if true it would be another example, like exercise and fasting, where injury/stress leads to long-term benefit.
One of the reasons junk food is bad is because it has lots of quickly-absorbed sugars, which rush into your bloodstream and force your insulin/glycogen system to ramp up quickly and do a lot of work. Over the long term, putting all this stress on your body’s ability to absorb sugars is thought to reduce your body’s insulin sensitivity, leading to metabolic disorders like prediabetes. So, chronic consumption of junk food is bad—but should I prefer a totally healthy “low-glycemic index” diet? Or a mostly low-glycemic-index diet where I occasionally consume a blast of sugary sweets to “exercise” my insulin system? I don’t think science has given us a real answer here, but most doctors would probably recoil in horror at the idea of “exercising” one’s metabolic system by occasionally binging junk food, and I’d be inclined to agree with them.
Some kinds of psychological stress and trauma are probably beneficial in the long run (for instance, working hard on a project to meet a deadline and feeling invigorated + learning better productivity skills as a result), while other kinds are probably just bad.
Basically, it seems like there are plenty of examples on both sides, and I can’t figure out any general rule that would let me predict ahead of time which seemingly-bad behaviors/stressors are secretly good or not. The examples of exercise and fasting are helpful reminders to keep an open mind, but I don’t think they can tell us much more than that when we’re trying to figure out how to think about sleep.
(Similarly, some things about the modern world are “superstimulus”—like junk food. But others are just progress—like the fact that I can afford a healthy diet with lots of meat and vegetables if I so choose, while my agrarian ancestors got much more of their calories from samey, not-very-nutritious grains. I don’t know if comfortable beds are a superstimulus encouraging us to oversleep harmfully or just modern progress enabling us to get higher-quality sleep. But I do appreciate that the “superstimulus” hypothesis is reasonable and encourages us to keep an open mind.)