Engineer working on next-gen satellite navigation at Xona Space Systems. I write about effective-altruist and longtermist topics at nukazaria.substack.com, or you can read about puzzle videogames and other things at jacksonw.xyz
Jackson Wagner
Cross-posting my long comment from the EA Forum:
I always appreciate your newsletter, and agree with your grim assessment of prediction markets’ long-suffering history. Here is what I am left wondering after reading this edition:
Okay, so the USA has mostly dropped the ball on this for forty years. But what about every other country? China seems pretty ambitious and willing to make things happen in order to secure their place on the world stage—where is the CCP-subsidized market hive-mind driving all the crucial central planning decisions? Well, maybe a prediction market doesn’t play well with wanting to exert lots of top-down control and suppress free speech. Okay, what about countries in Europe? What about Taiwan or Singapore? Nobody has yet achieved some kind of Hansonian utopia, so what is the limiting factor?
Maybe there is a strong correlation between different countries’ policy decisions caused by a ‘global elite culture’ that picks similar policies everywhere, and occasionally screws up by all deciding to ignore the potential of nuclear power, all choosing the same solutions to covid-19, etc. This is Hanson’s idea; I find it a bit conspiratorial compared to my framing in the next point. (But if it’s true, what are we to do? Perhaps try to build up EA/rationalism as a movement until we can influence global elite culture for the better? Or maybe become advisers to potential outliers from global elite culture, like the Saudi Arabian monarchy or something? Or somehow construct whole alternative institutions using stuff like crypto and charter cities, “Atlas Shrugged” style?)
Maybe it’s less about ‘elite’ culture and more about universal human biases. Perhaps prediction markets are abhorrent to normal folks insofar as they offend traditional status hierarchies (by holding people too closely to their word and exposing the hypocrisy of leaders, or something), and that causes them to be rejected wherever they are tried. That might sound nutty, but personally I think a big part of why healthcare systems are so complicated and expensive is that so many reform ideas that look great on paper run into universal, deeply ingrained cultural preferences—for instance, the taboo tradeoff between money & lives prevents healthcare systems from making price information too obvious or making inequalities in care quality too visible. Similarly, the human desire to show that we care causes us to overspend on end-of-life care when help would not do much good, and underspend on cheap preventative things (like doing more to push people towards better exercise/diet/etc). If this is true for prediction markets, what do we do? Maybe it suggests that we should build the prediction-market future outward from financial markets (via services like Kalshi), since those markets are (grudgingly!) tolerated by human culture, rather than trying to persuade governments or corporations to adopt prediction markets (where it is too easy for a leader to veto the idea based on gut opposition).
Maybe it’s misleading to frame the question as “why is EVERY COUNTRY failing in the SAME WAY”, because nearly all prediction-market advocates have been inside the USA/anglosphere, so other countries haven’t really had a fair shot at being persuaded? (In this case, maybe all that’s necessary is to fund prediction-market advocacy groups in Taiwan, Singapore, India, Dubai, South Korea, and other diverse locations until somebody finally takes the offer! Then, once one country is doing it, the innovation will spread more easily elsewhere.)
Maybe there’s no special explanation, and every nation is just failing for its own distinct reason, just because governments aren’t that competent and reform is hard and the possibility space of failure is much larger than the small target of success. Countries make dumb decisions all the time and are constantly leaving large amounts of potential economic growth on the table, just because life is tough and it’s hard to make good decisions. Null hypothesis! In which case we just need to try harder and then our dreams might come true (even here in the USA, despite the grim history of defeats). This hypothesis gets stronger when you consider that the existing community of prediction-market advocacy is quite small and there is not much funding in it.
Maybe prediction markets are somehow not ready for prime time, lacking some crucial feature that would speed adoption and retrospectively seem like an obvious part of the complete package? (In this case, the challenge is to figure out what feature to add, or how to otherwise tweak the design to get product/market fit—and then it’s off to the races. Hanson has often identified “the difficulty of finding a customer willing to pay for info” as the most difficult piece of the puzzle. Hence the appeal of making corporate prediction markets more usable, or of persuading governments to subsidize prediction markets on topics of public interest. This reasoning also lines up with your complaint about how, without a customer willing to subsidize the market, markets gravitate towards covering meaningless emotional topics like sports, where people are irrational.) Maybe we need to make prediction markets easier to use so they can be adopted more readily by corporate customers (my “gitlab of prediction markets” idea), or maybe we need to increase market liquidity by lowering fees and doing things like parking the invested funds in the S&P 500 so that prediction stops being a zero-sum game. Maybe we just need to find better and better ways around the regulations banning prediction markets. Et cetera. (A toy sketch of one standard subsidy mechanism appears below.)
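To make “subsidizing a market” concrete: Hanson’s usual mechanism for this is the logarithmic market scoring rule (LMSR), an automated market maker whose liquidity parameter caps the sponsor’s worst-case loss. Here is a minimal sketch of the math—my own illustrative code, not any production implementation:

```python
import math

def lmsr_cost(q, b):
    """Hanson's LMSR cost function: C(q) = b * ln(sum_i exp(q_i / b))."""
    return b * math.log(sum(math.exp(qi / b) for qi in q))

def lmsr_price(q, b, i):
    """Instantaneous price of outcome i; interpretable as its probability."""
    total = sum(math.exp(qj / b) for qj in q)
    return math.exp(q[i] / b) / total

def cost_to_buy(q, b, i, shares):
    """What a trader pays the market maker for `shares` of outcome i."""
    q_after = list(q)
    q_after[i] += shares
    return lmsr_cost(q_after, b) - lmsr_cost(q, b)

# A sponsor seeding a two-outcome market with b = 100 can lose at most
# b * ln(2) ~= 69 (in dollars) no matter how trading goes; that bounded
# loss is exactly the subsidy that pays traders for their information.
q = [0.0, 0.0]                     # outstanding shares of [YES, NO]
b = 100.0
print(lmsr_price(q, b, 0))         # 0.5 -- even odds before any trades
print(cost_to_buy(q, b, 0, 50.0))  # ~28.1 -- cost of buying 50 YES shares
```

The appeal for the “who pays for info?” problem is that the sponsor’s loss is bounded up front, and the money flows precisely to whoever moves the price toward the truth.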
Of course I would be eager to hear your thoughts on what the key limiting factor(s) might be.
In your last newsletter, you remarked, “It’s kind of interesting how $40k [given away by Astral Codex Ten Grants] feels like a significant quantity of all the funding there is for small experiments in the forecasting space. This is probably suboptimal.” What prediction-market experiments would you be most interested to see run?
Do you think that advocacy/lobbying for prediction markets (perhaps in other countries as mentioned) is a worthwhile endeavor? Or do you think that would be less effective than experimenting with different market designs?
Robin Hanson wanted to run a “Fire the CEO” conditional prediction market about companies’ stock prices if the CEO did / did not resign by the end of the quarter. I guess the plan would be to subsidize the market yourself at first, prove the worth of the idea, and then run a service where companies pay to be included among your many company-specific markets. Do you think this is a promising plan? Is money the biggest roadblock, or would it be illegal to offer this service in the USA / without the companies’ permission / etc? (Could we just run it in the UK or something?)
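For concreteness, here is a toy sketch of how such a conditional market (a “called-off bet”) settles and what its decision signal looks like—all numbers and names below are my own invented examples, not Hanson’s:

```python
def settle_position(condition_occurred: bool, final_price: float,
                    entry_price: float, shares: float) -> float:
    """Payoff of one position in a called-off bet on the end-of-quarter
    stock price. If the condition never occurs, the bet is voided and
    the trader simply gets their money back (payoff 0)."""
    if not condition_occurred:
        return 0.0
    return shares * (final_price - entry_price)

# Two markets trade the same stock: one conditional on the CEO resigning
# this quarter, one conditional on the CEO staying. Only the market whose
# condition actually occurs ever settles; the other is refunded.
price_if_ceo_leaves = 54.0   # implied E[stock price | CEO resigns]
price_if_ceo_stays  = 48.0   # implied E[stock price | CEO stays]

# The market's "advice" is whichever action has the higher conditional price:
if price_if_ceo_leaves > price_if_ceo_stays:
    print("Traders expect the company to be worth more without this CEO.")
else:
    print("Traders expect the company to be worth more with this CEO.")
```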
FTX is a huge crypto exchange; they’ve already run prediction markets for presidential elections in the past, and they’re apparently EA-aligned. Could we ask them if they’d please run other prediction markets that we think would be useful, such as a conditional prediction market on the stock market’s performance given a presidential election outcome, or a prediction market about some high-minded EA-relevant topic?
There is kind of a difference between “studying forecasting to benefit EA” (which seems to describe many of your projects at Quantified Uncertainty Research Institute, such as creating software to help grantmakers estimate impact), versus “prediction markets as an EA cause area” (under the heading of “improving institutional decision-making”, progress studies, and improving “civilizational adequacy”). How do you feel about the relationship between the two, and which side do you tend to feel is the better place to focus effort in terms of impact?
This is a lot of questions, so no pressure to respond to everything—I mostly intend this as food for thought. Also, I wrote this post casually, but let me know if you think it would be good to rework as a top-level post (ie, if you think these are good questions that people should be thinking more about).
You claim that “just fight the war” is a wasteful and inefficient way to defend against invasion compared to clever strategies like taking out the enemy’s leadership or deploying a propaganda campaign to change the invading nation’s opinion about your nation’s citizens. But doesn’t most invasion-defense mostly consist of just fighting the war? If assassinations are so easy and are obviously the right thing to do, shouldn’t they happen more often? When was the last time assassinations were used to end a war anywhere? Without examples, the ideas in this post seem unmoored from any real assessment about what’s hard vs easy.
As for cultural/propaganda solutions, these all seem far too slow. Once the enemy’s tanks are rolling, the war will be decided in a matter of days or weeks—no time to go about changing the cultural attitudes of an entire population! (And how might we expect to shift their attention to domestic issues overnight, when we have to compete with the headline that their country has just declared war??) I could see some of these defensive tactics working as a way to try and prevent invasion from ever occurring in the first place (like Taiwan’s situation), or as a way to make the best of a small international incident (like how the occasional India/Pakistan flare-ups are played by both governments to score domestic political points), or that they would become relevant amid a long, drawn-out stalemate. But if you’re the victim of a fast-moving surprise invasion, no clever cultural shenanigans are going to stop the hard power streaming across your borders.
(heavy spoilers for ending of HPMOR):
HPMOR takes place in the 1990s, and importantly takes place before most people realized that the mysterious Quirrell was actually none other than the all-powerful nefarious amoral supergenius behind Lord Voldemort. Presumably, the exchange value of Quirrell points fluctuated over time—low during periods when they only seemed useful for getting favors from an eccentric Defense Professor, high as the Defense Professor became increasingly well-known for his extreme competence and mysterious proximity to important events at Hogwarts, then reaching an astronomically high value when it seemed that Quirrell was on the cusp of achieving total domination over all of human civilization forever, then finally crashing to around the current value after Quirrell was defeated and imprisoned indefinitely until such time as he might be safely healed.
Economists debate whether the current market value of Quirrell points derives more from the possibility of receiving favors from Quirrell in future scenarios where he is revived (whether during a utopian far-future or a disastrous near-term return of Lord Voldemort), or merely from the fact that, since only around 21 thousand Quirrell points were minted before their anonymous founder disappeared, Quirrell points form a sound monetary basis for a noninflationary store of value.
None other than Peter Thiel wrote a huge essay about investing while under anthropic shadow, and I wrote a post analyzing said essay! It is interesting, although pretty abstract in a way that probably makes it more relevant to organizations like OpenPhilanthropy than to most private individuals. Some quotes from Thiel’s essay:
Apocalyptic thinking appears to have no place in the world of money. For if the doomsday predictions are fulfilled and the world does come to an end, then all the money in the world — even if it be in the form of gold coins or pieces of silver, stored in a locked chest in the most remote corner of the planet — would prove of no value, because there would be nothing left to buy or sell. Apocalyptic investors will miss great opportunities if there is no apocalypse, but ultimately they will end up with nothing when the apocalypse arrives. Heads or tails, they lose. …A mutual fund manager might not benefit from reflecting about the danger of thermonuclear war, since in that future world there would be no mutual funds and no mutual fund managers left. Because it is not profitable to think about one’s death, it is more useful to act as though one will live forever.
Since it is not profitable to contemplate the end of civilization, this distorts market prices. Instead of telling us about the objective probabilities of how things will play out, prices are based on probabilities adjusted by the anthropic logic of ignoring doomed scenarios:
Let us assume that, in the event of [the project of civilization being broadly successful], a given business would be worth $100/share, but that there is only an intermediate chance (say 1:10) of that successful outcome. The other case is too terrible to consider. Theoretically, the share should be worth $10, but in every world where investors survive, it will be worth $100. Would it make sense to pay more than $10, and indeed any price up to $100? Whether in hope or desperation, the perceived lack of alternatives may push valuations to much greater extremes than in nonapocalyptic times.
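Spelling out the arithmetic in that passage (reading the quote’s “1:10” as a 10% chance of the good outcome):

```latex
\[
\underbrace{\mathbb{E}[V]}_{\text{objective price}}
  = 0.1 \times \$100 + 0.9 \times \$0 = \$10,
\qquad
\underbrace{\mathbb{E}[V \mid \text{investors survive}]}_{\text{price in surviving worlds}}
  = \$100 .
\]
```

Any price between $10 and $100 can thus look “rational” to an investor who conditions on survival—that gap is the distortion Thiel is pointing at.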
See my post for more.
I totally second this. A couple facts about my own routine:
I’ve been using a Quest 2 for regular (2x-3x weekly) brief exercise sessions for about a year. In combination with an occasional (0.5x-1x weekly) traditional strength-training routine and some jogging around the park, this is the most fun I’ve ever had exercising and the most consistent I’ve ever been about it, although I still wish I were doing more.
I use this basic pair of weighted gloves when I play VR, which makes games like Beat Saber much more of a workout! I was initially put off by people saying that wrist weights might cause injury, but I think this refers to weights much heavier than 1lb per hand. In my experience, I haven’t come close to anything that feels dangerous—if you wouldn’t fear for your wrists while dancing and holding a pair of largeish apples, you shouldn’t have anything to worry about from relatively light weighted gloves. I also got some of this foam-exercise-mat stuff to create a defined space in my garage for exercise and VR, which has been nice.
I use VR games not just as a workout in themselves, but also as a reward to get me to do more ordinary types of exercise. ie, I’ll promise myself: “okay—get through this 1-hour r/bodyweightfitness routine, and then you can chill with an interesting non-exercise VR game for a while to cool off, and then play some fun Beat Saber to cap it all off.”
VR exercise really is extremely convenient. I live directly across from a park with a nice set of tennis courts, but: Sometimes other people are using the tennis courts! Sometimes it’s dark outside! Sometimes it’s too hot or cold! Sometimes I don’t have anyone who wants to play tennis with me! Sometimes I just don’t feel like going out and doing a bunch of exercise in public! In all those cases, it’s awesome to be able to just do a jam session of Beat Saber in my garage.
And about what games I enjoy:
I totally second Beat Saber, it’s by far my favorite. I’d also recommend the hilarious and engaging Gorn and the boxing game Creed: Rise to Glory for variety. Echo Arena, a multiplayer “Ender’s Game” style zero-gravity ultimate-frisbee game, might also make a decent exercise game although I haven’t put much time into it myself. When you start getting bored of Beat Saber, just get into downloading custom songs and it’ll be tons of fun all over again.
(When you’re wearing weighted gloves, any game where you often hold your hands out also becomes a bit of a weird endurance exercise—archery games like In Death, the rock-climbing simulator The Climb, or the slow-motion action game Super Hot. But although this can get tiring for your arm muscles, I don’t think it’s really good exercise, since it doesn’t get your heart pumping.)
As someone who plays lots of both ordinary videogames and VR games, I think a big misconception among players of non-VR games is that, since VR looks so immersive and all-encompassing (literally a box strapped to your face), it would therefore be extremely addictive. In fact, VR games generally seem much less addictive than normal videogames. VR games tend to be much shorter / smaller in scope than the most popular non-VR games. And the fact that I’m moving around doing things (plus wearing a slightly-uncomfortable headset the whole time) just naturally causes me to want to switch things up and move between different activities more frequently, rather than sitting down in front of a screen where it takes a bit more mental effort to stop playing and get up to do something else.
In light of that fact, and in the name of encouraging you to buy a Quest 2 and use it mostly for exercise, here are a couple of non-exercise VR games that I figure LessWrongers might enjoy (skipping over some popular stuff that already appears on internet best-of-Quest lists). None of these games are more than about 5 hours long, per my point about VR games being mostly short-and-sweet:
A Fisherman’s Tale—a delightful, thoughtful game about recursion and symmetries.
Shadow Point—if you, like me, are in love with Braid and The Witness, Shadow Point feels a little like a VR fan-game inspired by those games’ aesthetic.
Virtual Virtual Reality—a funny, Portal-2-esque experience that explores different mechanics while touring you through a bunch of comedy skits intelligently parodying different ideas about cyberspace.
Rez Infinite / Tetris Effect—entrancing, relaxing, colorful games with nice music. Nice for taking a short break between workout sections.
(There are also a bunch of really fascinating VR games on PC, like “4D Toys” about the physics of 4-dimensional objects, “Hyperbolica” about hyperbolic geometry, and “Paper Beast”, a beautiful game about ecology and physics simulation, which unfortunately don’t run natively on Quest. Instead they require a complicated/finicky setup to connect to a gaming PC.)
The rule for the next round of the contest is:
The most-upvoted suggestion will become the rules for the next round of the contest, subject to the constraint that:
The winning suggestion for the next round must describe a contest that an ordinary person could reasonably enter with less than a day’s effort; eg asking for people to write and submit a google doc rather than asking contestants to create a 1000-acre wildlife preserve or implement a set of rules that is clearly paradoxical/impossible.
You are in luck; it would appear that Elizabeth has already produced some significant long-covid analysis of exactly this nature!
I follow the logic but also find myself amused by the thought that “simulate every possible unfriendly AI”, which sounds like literally the worst civilizational policy choice ever (no matter how safe we think our containment plan might be), could possibly be considered a good idea.
Eliezer’s point is well-taken, but the future might have lots of different kinds of software! This post seemed to be mostly talking about software that we’d use for brain-computer interfaces, or for uploaded simulations of human minds, not about AGI. Paul Christiano talks about exactly these kinds of software security concerns for uploaded minds here: https://www.alignmentforum.org/posts/vit9oWGj6WgXpRhce/secure-homes-for-digital-people
For starters, why aren’t we already offering the most basic version of this strategy as a workplace health benefit within the rationality / EA community? For example, on their workplace benefits page, OpenPhil says:
We offer a family forming benefit that supports employees and their partners with expenses related to family forming, such as fertility treatment, surrogacy, or adoption. This benefit is available to all eligible employees, regardless of age, sex, sexual orientation, or gender identity.
Seems a small step from there to making “we cover IVF for anyone who wants it (even if your fertility is fine) + LifeView polygenic scores” a standard part of the alignment-research-agency benefits package. Of course, LifeView only offers health scores, but they will also give you the raw genetic data. Processing this genetic data yourself, DIY style, could be made easier—maybe there could be a blog post describing how to use an open-source piece of software, where to find the latest version of EA3, and so forth. (The core arithmetic is simple; see the sketch below.)
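For what it’s worth, the scoring step itself is conceptually just a weighted sum of allele counts. Here is a minimal DIY sketch—the file format, column names, SNP ids, and weights are all hypothetical placeholders, and a real pipeline (eg PLINK) would also handle strand flips, missing calls, and ancestry adjustment:

```python
import csv

def load_genotypes(path):
    """Map SNP id -> dosage (0, 1, or 2 copies of the effect allele).
    Assumes a hypothetical pre-harmonized CSV with columns: rsid,dosage."""
    with open(path) as f:
        return {row["rsid"]: int(row["dosage"]) for row in csv.DictReader(f)}

def polygenic_score(genotypes, weights):
    """Naive additive polygenic score: sum over SNPs of
    (GWAS effect size) x (number of effect alleles carried),
    skipping any variant missing from the raw data."""
    return sum(beta * genotypes[rsid]
               for rsid, beta in weights.items() if rsid in genotypes)

# Weights would come from published GWAS summary statistics (eg whatever
# the latest "EA3"-style release is); these values are made up.
weights = {"rs0000001": 0.021, "rs0000002": -0.013}
print(polygenic_score(load_genotypes("raw_genotypes.csv"), weights))
```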
All this might be a lot of trouble for (if you are pessimistic about PGT’s potential) a rather small benefit. We are not talking Von Neumanns here. But it might be worth creating a streamlined community infrastructure around this anyways, just in case the benefit becomes larger as our genetic techniques improve.
You might be interested in this post on the EA Forum advocating for the potential of free/open-source software as an EA cause or EA-adjacent ally movement (see also my more skeptical take in the comments). https://forum.effectivealtruism.org/posts/rfpKuHt8CoBtjykyK/how-impactful-is-free-and-open-source-software-development
I also thought this other EA Forum post was a good overview of the general idea of altruistic software development, some of the key obstacles & opportunities, etc. It’s mostly focused on near-term projects for creating software tools that can improve people’s reasoning and forecasting performance, not cybersecurity for BCIs or digital people, but you might find it helpful for giving an in-depth EA perspective on software development that sometimes contrasts and sometimes overlaps with the perspective of the open-source movement: https://forum.effectivealtruism.org/posts/2ux5xtXWmsNwJDXqb/ambitious-altruistic-software-engineering-efforts
I think there are some ways of flipping tables that offer some hope (albeit a longshot) of actually getting us into a better position to solve the problem, rather than just delaying the issue. Basically, strategies for suppressing or controlling Earth’s supply of compute, while pressing for differential tech development on things like BCIs, brain emulation, human intelligence enhancement, etc, plus (if you can really buy lots of time) searching for alternate, easier-to-align AGI paradigms, and making improvements to social technology / institutional decisionmaking (prediction markets, voting systems, etc).
I would write more about this, but I’m not sure if MIRI / LessWrong / etc want to encourage lots of public speculation about potentially divisive AGI “nonpharmaceutical interventions” like fomenting nuclear war. I think it’s an understandably sensitive area, which people would prefer to discuss privately.
Why would showing that fish “feel empathy” prove that they have inner subjective experience? It seems perfectly possible to build a totally mechanical, non-conscious system that nevertheless displays signs of empathy. Couldn’t fish just have some kind of built-in, not-necessarily-conscious instinct to protect other fish (for instance, by swimming together in a large school) in order to obtain some evolutionary benefit?
Conversely, isn’t it possible for fish to have inner subjective experience but not feel empathy? Fish are very simple creatures, while “empathy” is a complicated social emotion. Especially in a solitary creature (like a shark, or an octopus), it seems plausible that you might have a rich inner world of qualia alongside a wide variety of problem-solving / world-modeling skills, but no social instincts like jealousy, empathy, loyalty, etc. Fish-welfare advocates often cite studies that seem to show fish having an internal sense of pain vs pleasure (eg, preferring water that contains numbing medication), or that bees can have an internal sense of being optimistic/risky vs pessimistic/cautious—if you think that empathy proves the existence of qualia, why are these similar studies not good enough for you? What’s special about the social emotion of empathy?
Personally, I am more sympathetic to the David Chalmers “hard problem of consciousness” perspective, so I don’t think these studies about behaviors (whether social emotions like jealousy or more basic emotions like optimism/pessimism) can really tell us that much about qualia / inner subjective experience. I do think that fish / bees / etc probably have some kind of inner subjective experience, but I’m not sure how “strong”, or vivid, or complex, or self-aware, that experience is, so I am very uncertain about the moral status of animals. (Personally, I also happily eat fish & shrimp all the time.)
In general, I think this post is talking about consciousness / qualia / etc in a very confused way—if you think that empathy-behaviors are ironclad proof of empathy-qualia, you should also think that other (pain-related, etc) behaviors are ironclad proof of other qualia.
“We have a confusing situation here.” -- Indeed, I think this post is a little confused, mixing up a few very different questions:
Is it a good idea to literally punish & reward people based on their level of intelligence, in the hopes that they will spontaneously make themselves more intelligent?
Usually no, as your example of Frank illustrates. Because your own intelligence level is a hard thing to change. Punishing people for being born dumb is thus a bit like punishing people for being born short—pointless to try and get people to change something that they can’t change.
Is it a good idea to reward intellectual achievements and hard work on important problems, while punishing laziness / wasted time / underperformance? And similarly, to reward open-minded thoughtfulness while punishing “lazy thinking” and knee-jerk responses?
Yes, because this is a way of motivating people about something they can change—what they choose to work on, how hard they work, etc. It’s a good thing that we have Nobel Prizes to reward people who discover breakthrough cancer medicines, but no prizes for people who discover breakthrough strategies in esports videogames, or, for that matter, for people who just sit around watching TV. For instance, it would be a good idea to praise Frank when he does a good job at work, or when he shows a bit of openness towards the idea of going to the doctor.
Is it a good idea to effectively “reward” & “punish” people on a societal level, by trying to have a meritocratic society where we find the smartest (and hardest-working, and most prosocial, and otherwise virtuous) people to run important institutions, while dumb people get less well-paying, less-impactful jobs?
Yes, because a society/corporation/government/etc run by effective, virtuous people will work more smoothly and create a better life for everyone. For instance, I would rather have you be my financial advisor, than have your dog be my financial advisor!
Is intelligence good for happiness on an individual level, or is it better for your own sake to be dumb?
Opinions differ on this; personally I think that intelligence is very good for personal happiness and life-satisfaction and living a meaningful life. Here I will quote from another comment I recently made: “You could probably find some narrowly-defined type of happiness which is anticorrelated with intelligence. But a lot of the meaning and happiness in my life seem like they would get better with more intelligence. Like my ability to understand my place in the world and live an independent life, planning my career/relationships/etc with lots of personal agency. Or my ability to appreciate the texture/experience of being alive—noticing sensations, taking time to “smell the roses”, and making meditative/spiritual/introspective progress of understanding my own mind. My ability to overcome emotional difficulties/setbacks by ‘working through them’ and communicating well with the person I might be angry at. My material quality of life, enabled by my high-income job, which I couldn’t hold down if I wasn’t reasonably smart. My ability to appreciate art on a deep level (see my lecture series about the videogame “The Witness”, an intellectual pursuit which brings me great joy). And so forth.”
(Cross-posting my comment from the EA Forum):
Considering how awesome your video on Prediction Markets is, I think it could be a great idea to make videos about some other institutional innovation ideas that are popular in the rationalist world—charter cities / network states, alternate voting systems like approval voting and liquid democracy, and so forth. (If you want to take things in an even more political direction, you could produce animated versions of the Bryan Caplan arguments for open borders or YIMBYism.)
For some more traditionally rationalist / EA media ideas, here are two of my comments from previous threads about the idea of EA documentaries.
Craziest idea: make a video about HPMOR—either a movie-trailer-style animation, or a longer, more traditional youtube-y summary of the early parts of the story—and hope that this works as a short hook to get more people reading the whole thing? That would maximize the amount of rationalist content conveyed per unit of animation effort. (I feel like video-as-hook would work well for HPMOR and other works of fiction, whereas with other EA / rationalist content it is better to stick to “video as summary of the key message”.) Idk about the copyright issues here though.
On the subject of fiction, how about illustrating some much-shorter-than-HPMOR stories from the (allegedly slim) pantheon of great EA & rationalist fiction works? Fable of The Dragon Tyrant has already been done—but it got 9M views!!
The metaphor in “500 Million, But Not A Single One More” is very similar to “The Dragon Tyrant”, and it would probably work well with RationalAnimations’ style of illustrating ideas using monsters and leviathans (like in your “Transparent Tragedies” episode).
“The Drowning Child and the Expanding Circle” isn’t a fiction story that could be transcribed word-for-word, rather it’s a thought experiment. But it would make for a great hook for a video that goes on to explain basic EA concepts.
Similarly, just illustrating the “paperclips” AI thought-experiment (detailed and realistic rendition here, shorter and less-realistic versions found in many places including the clicker game “universal paperclips”, which became quite popular) could be very valuable, and I feel like it would fit well with the tone of RationalAnimations’ existing videos about aliens, apocalypses, and other big-picture high-drama topics.
I feel like there are a LOT of different angles that you could take when making videos about superintelligence / AI. But this comment is already long so I will not try to enumerate them here.
Since your audience loves Robin Hanson content so much, give them a summary of Age of Em? Although this is not a super-relevant subject matter.
If the copyright issues around HPMOR are navigable, consider adapting the EA short-story “A Common Sense Guide to Doing the Most Good”, which is like the rational-fiction version of this famous SMBC comic where superman turns a crank all day to generate power—in real life, superman could do good much more effectively than turning the crank!
Or how about a story where you use the vivid, apocalyptic details of the Toba Supervolcanic Eruption as a way to both inform viewers about the real history of humanity’s near-extinction 75,000 years ago, AND as a metaphor about the importance of longtermism and humanity’s vast potential? Again, I think this would fit well with the themes of other videos by RationalAnimati—wait just a second! This isn’t an all-time rationalist classic! This is just shameless self-promotion!!
As for why I think these are good ideas / what you should be optimizing for:
There is a natural tradeoff between making videos that draw in lots of new viewers (for instance, videos about aliens!) vs making videos that aren’t as viral but communicate more of the information that you truly want to impart (for instance about futarchy or longtermism). So you want to have the channel strike a balance between those two things, including by alternating between videos that are more viral-oriented versus more education-oriented.
For education-oriented videos, I’d be thrilled if you made some more videos about institutional innovations, but that’s just my personal hobbyhorse because I think it’s underrated within EA. The more obvious direction to go for education videos would be to basically just adapt the 80,000 Hours content into a series of videos. (Some of these could have viral potential, of course. Imagine a video about how you can do more good for the world as [counterintuitive career path like AI safety programmer or etc] than as a doctor—make sure to mention that fact about how the number of US doctors is essentially capped by protectionist regulations! That would seem like a very controversial take to most normal people, IMO.) Anyways, personally I think the goal of the educational videos should be communicating the core EA / rationalist worldview and thinking style. (Rather than, say, CFAR-style productivity tips, or object-level education about detailed EA issues, or random mind-blowing ideas about quantum mechanics and the universe.) I could talk about this in more detail if you are interested.
For the virally-oriented videos, it’s presumably more about just figuring out what’s going to be a big hit. Hence my thought that it might be good to recycle the greatest hits of the EA/rationalist movement, especially catchy fiction which might adapt better than abstract ideas. Although I certainly don’t know anything about growing a youtube channel to 100K subscribers, so all of my ideas about the viral side of things should be taken with a grain of salt!
As I wrote in a commentary on Gwern’s “The Narrowing Circle”, respect for ancestors is probably justified in some sort of acausal-negotiation sense, even if (as commenters Richard Kennaway and Dagon feel) we don’t actually care about their values:
A drop in respect for ancestors might also directly cause a drop in concern for descendants—it might be logical to disregard the lives of future generations if we assume that they (just like us) will ignore the wishes of their ancestors!
Consider: it’s certainly important that we somewhat respect the financial wishes of the dead (trusts, foundations, inheritance, etc) rather than (as might seem logical at first glance) confiscating all their assets upon death. This is because being able to pass on wealth and create a durable legacy (like the Rockefeller Foundation or etc) is part of what motivates people to earn and invest for the future in the first place.
By a similar logic, it might be important that we should also commit to respecting the cultural wishes of the dead, in order to somehow motivate people to take a more long-term outlook. As an off-the-cuff example, maybe a culture that emphasized respect for ancestors would be higher-fertility, since parents would know that their children would be more likely to carry forward their values than in today’s no-respect-for-past-generations culture.
(this comment is kind of a “i didn’t have time to write you a short letter so I wrote you a long one” situation)
re: Infowar between great powers—the view that China+Russia+USA invest a lot of effort into infowar, but mostly “defensively” / mostly trying to shape domestic opinion, makes sense. (After all, it must be easier to control the domestic media/information landscape!) I would tend to expect that doing domestically-focused infowar stuff at a massive scale would be harder for the USA to pull off (wouldn’t it be leaked? wouldn’t it be illegal somehow, or at least something that public opinion would consider a huge scandal?), but on the other hand I’d expect the USA to have superior infowar technology (subtler, more effective, etc). And logically it might also be harder to perceive the effects of USA infowar techniques, since I live in the USA, immersed in its culture.

Still, my overall view is that, although the great powers certainly expend substantial effort trying to shape culture, and have some success, they don’t appear to have any next-gen technology qualitatively different from and superior to the rhetorical techniques deployed by ordinary successful politicians like Trump, social movements like EA or wokeism, advertising / PR agencies, media companies like the New York Times, etc. (In the way that, eg, engineering marvels like the SR-71 Blackbird were generations ahead of competitors’ capabilities.) So I think the overall cultural landscape is mostly anarchic—lots of different powers are trying to exert their own influence, and none of them can really control or predict cultural changes in detail.
re: Social media companies’ RL algorithms are powerful but “they probably couldn’t prevent algorithms from doing this if they tried, due to Goodhart’s law”. -- Yeah, my take is that the overt attempts at propaganda (aimed at placating the NYT) seem very weak and clumsy. Meanwhile, the underlying RL techniques seem potentially powerful but poorly understood or not very steerable, since social media companies seem to be mostly optimizing for engagement (and not even always succeeding at that; here we are talking on LessWrong instead of tweeting / tiktoking), rather than deploying clever infowar superweapons. If they have such power, why couldn’t left-leaning Silicon Valley prevent the election of Trump using subtle social-media-RL trickery?
(Although I admit that the reaction to the 2016 election could certainly be interpreted as Silicon Valley suddenly realizing, “Holy shit, we should definitely try to develop social media infowar superweapons so we can maybe prevent this NEXT TIME.” But then the 2020 election was very close—not what I’d have expected if info-superweapons were working well!)
With Twitter in particular, we’ve had such a transparent look at its operations during the handover to Elon Musk, and it just seems like both sides of that transaction have been pretty amateurish and lacked any kind of deep understanding of how to influence culture. The whole fight seems to have been about where to tug one giant lever called “how harshly do we moderate the tweets of leftists vs rightists”. This lever is indeed influential on twitter culture, and thus culture generally—but the level of sophistication here just seems pathetic.

TikTok is maybe the one case where I’d be sympathetic to the idea that a lot of what appears to be random insane trends/beliefs fueled by SGD algorithms and internet social dynamics is actually the result of fairly fine-grained cultural influence by Chinese interests. I don’t think TikTok is very world-changing right now (as we’d expect, it’s targeting the craziest and lowest-IQ people first), but it’s at least kinda world-changing, and maybe it’s the first warning sign of what will soon be a much bigger threat? (I don’t know much about the details of TikTok the company, or the culture of its users, so it’s hard for me to judge how much fine-grained control China might or might not be exerting.)
Unrelated—I love the kind of sci-fi concept of “people panic but eventually go back to using social media and then they feel fine (SGD does this automatically in order to retain users)”. But of course I think that the vast majority of users are in the “aren’t panicking” / never-think-about-this-at-all category, and there are so few people in the “panic” category (panic specifically over subtle persuasion manipulation tech that isn’t just trying to maximize engagement but instead achieve some specific ideological outcome, I mean) that there would be no impact on the social-media algorithms. I think it is plausible that other effects like “try not to look SO clickbaity that users recognize the addictiveness and leave” do probably show up in algorithms via SGD.
More random thoughts about causes that the USA might historically have wanted to run infowar campaigns about: anti-communism during the Cold War, maybe continuing into a kind of generic pro-corporate / pro-growth attitude these days. (But lots of people were pro-communist back in the day, and remain anti-corporate/anti-growth today! And even the Republican party is less and less pro-business… their basic model isn’t to mind-control everyone into becoming fiscal conservatives, but instead to gain power by exploiting the popularity of social conservatism and then use that power to implement fiscal conservatism.)
Maybe I am taking a too-narrow view of infowar as “the ability to change peoples’ minds on individual issues”, when actually I should be considering strategies like “get people hyped up about social issues in order to gain power that you can use for economic issues” as a successful example of infowar? But even if I consider this infowar, then it reinforces my point that the most advanced stuff today all seems to be variations on normal smart political strategy and messaging, not some kind of brand-new AI-powered superweapon for changing people’s minds (or redirecting their focus or whatever) in a radically new way.
Since WW2, and maybe continuing to today, the West has tried to ideologically immunize itself against Nazism. This includes a lot of trying to teach people to reject charismatic dictators, to embrace counterintuitive elements of liberalism like tolerance/diversity, and even to deny inconvenient facts like racial group differences for the sake of social harmony. In some ways this has gone so well that we’re getting problems from going too far in this direction (wokeism), but in other ways it can often feel like liberalism is hanging on by a thread and people are still super-eager to embrace charismatic dictators, incite racial conflict, etc.
“Human brains are extremely predisposed to being hacked, governments would totally do this, and the AI safety community is unusually likely to be targeted.”
—yup, fully agree that the AI safety community faces a lot of peril navigating the whims of culture and trying to win battles in a bunch of diverse high-stakes environments (influencing superpower governments, huge corporations, etc) where they are up against a variety of elite actors with some very strong motivations. And there is peril both in navigating the “conventional” human-persuasion-transformed social landscape of today’s world (already super-complex and difficult) and the potentially AI-persuasion-transformed world of tomorrow. I would note, though, that these battles will (mostly?) play out in pretty elite spaces, whereas I’d expect the power of AI information superweapons to have the most powerful impact on the mass public. So I’d expect to have at least some warning, in the form of seeing the world go crazy (in a way that seems different from and greater than today’s anarchic internet-social-dynamics-driven craziness), before I myself went crazy. (Unless there is an AI-infowar-superweapon-specific hard takeoff where we suddenly get very powerful persuasion tech but still don’t get the full ASI singularity??)
re: Dath Ilan—this really deserves a whole separate comment, but basically I am also a big fan of the concept of Dath Ilan, and I would love to hear your thoughts on how you would go about trying to “build Dath Ilan” IRL.

What should an individual person, acting mostly alone, do to try and promote a more Dath-Ilani future? Try to practice & spread LessWrong-style individual-level rationality, maybe (obviously Yudkowsky did this with LessWrong and other efforts). Try to spread specific knowledge about the way society works and thereby build energy for / awareness of ways that society could be improved (Inadequate Equilibria kinda tries to do this? seems like there could be many approaches here). Personally I am also always eager to talk to people about specific institutional / political tweaks that could lead to a better, more Dath-Ilani world: georgism, approval voting, prediction markets, charter cities, etc. Of those, some would seem to build on themselves while others wouldn’t—what ideas seem like the optimal, highest-impact things to work on? (If the USA adopted georgist land-value taxes, we’d have better land-use policy and faster economic growth, but culture/politics wouldn’t hugely change in a broadly Dath-Ilani direction; meanwhile, prediction markets or new ways of voting might have snowballing effects where you get the direct improvement but also make culture more rational & cooperative over time.)
What should a group of people ideally do? (Like, say, an EA-adjacent silicon valley billionaire funding a significant minority of the EA/rationalist movement to work on this problem together in a coordinated way.) My head immediately jumps to “obviously they should build a rationalist charter city”:
The city doesn’t need truly nation-level sovereign autonomy, the goal would just be to coordinate enough people to move somewhere together a la the Free State Project, gaining enough influence over local government to be able to run our own policy experiments with things like prediction markets, georgism, etc. (Unfortunately some things, like medical research, are federally regulated, but I think you could do a lot with just local government powers + creating a critical mass of rationalist culture.)
Instead of moving to a random small town and trying to take over, it might be helpful to choose some existing new-city project to partner with—like California Forever, Telosa, Prospera, whatever Zuzalu or Praxis turn into, or other charter cities that have amenable ideologies/goals. (This would also be very helpful if you don’t have enough people or money to create a reasonably-sized town all by yourself!)
The goal would be twofold: first, run a bunch of policy experiments and try to create Dath-Ilan-style institutions (where legal under federal law if you’re still in the USA, etc). And second, try to create a critical mass of rationalist / Dath Ilani culture that can grow and eventually influence… idk, lots of people, including eventually the leaders of other governments like Singapore or the UK or whatever. Although it’s up for debate whether “everyone move to a brand-new city somewhere else” is really a better plan for cultural influence than “everyone move to the bay area”, which has been pretty successful at influencing culture in a rationalist direction IMO! (Maybe the rationalist charter city should therefore be in Europe or at least on the East Coast or something, so that we mostly draw rationalists from areas other than the Bay Area. Or maybe this is an argument for really preferring California Forever as an ally, over and above any other new-city project, since that’s still in the Bay Area. Or for just trying to take over Bay Area government somehow.)
...but maybe a rationalist charter city is not the only or best way that a coordinated group of people could try to build Dath Ilan?
This video is widely believed to be a CGI fake.
Incurring debt for negative votes is a hilarious image: “Fool! Your muddled, meandering post has damaged our community’s norm of high-quality discussion and polluted the precious epistemic commons of the LessWrong front page—now you must PAY for your transgression!!!”
The fasting analogy is interesting, as is the analogy with exercise—some kinds of activities are beneficial in the long-run even when they are damaging/unpleasant in the short run. But surely these are exceptions to the general rule, right?
Besides exercise, it’s not good to repeatedly injure yourself and then have the wounds heal. (Exercise is essentially the small, specific subtype of “injury” which is actually good for the body in the long term.)
Getting sick with a cold or flu is good at building immunity to that kind of virus when it comes around a second time, but aside from immunity concerns, it would be better for your health to never become sick at all. (As with viruses, the same goes for diseases caused by parasites or bacteria.) Especially as a young child, getting badly sick can impact your development and later IQ / income / etc substantially. Getting mildly sick is probably mildly bad for those same metrics.
On the other hand, I enjoyed your post a few months ago examining whether letting kids play outside and get dirty is helpful for calibrating their immune systems and reducing allergies later in life. It seems like the “hygiene hypothesis” is less firmly established than I thought, but if true it would be another example, like exercise and fasting, where injury/stress leads to long-term benefit.
One of the reasons junk food is bad is that it has lots of quickly-absorbed sugars, which rush into your bloodstream and force your insulin/glycogen system to ramp up quickly and do a lot of work. Over the long term, putting all this stress on your body’s ability to absorb sugars is thought to reduce your insulin sensitivity, leading to metabolic disorders like prediabetes. So, chronic consumption of junk food is bad—but should I prefer a totally healthy “low-glycemic-index” diet? Or a mostly low-glycemic-index diet where I occasionally consume a blast of sugary sweets to “exercise” my insulin system? I don’t think science has given us a real answer here, but most doctors would probably recoil in horror at the idea of “exercising” one’s metabolic system by occasionally binging on junk food, and I’d be inclined to agree with them.
Some kinds of psychological stress and trauma are probably beneficial in the long run (for instance, working hard on a project to meet a deadline and feeling invigorated + learning better productivity skills as a result), while other kinds are probably just bad.
Basically, it seems like there are plenty of examples on both sides, and I can’t figure out any general rule that would let me predict ahead of time which seemingly-bad behaviors/stressors are secretly good or not. The examples of exercise and fasting are helpful reminders to keep an open mind, but I don’t think they can tell us much more than that when we’re trying to figure out how to think about sleep.
(Similarly, some things about the modern world are “superstimulus”—like junk food. But others are just progress—like the fact that I can afford a healthy diet with lots of meat and vegetables if I so choose, while my agrarian ancestors got much more of their calories from samey, not-very-nutritious grains. I don’t know if comfortable beds are a superstimulus encouraging us to oversleep harmfully, or just modern progress enabling higher-quality sleep. But I do appreciate that the “superstimulus” hypothesis is reasonable and encourages us to keep an open mind.)