World War II ended in 1945—right before the green revolution. There has been no direct war between major world powers between then and now. I don’t think this is a coincidence.
Although I agree with the overarching point, I don’t think this particular line of argument holds up. For instance, the invention of the Haber-Bosch process did nothing to stop the two bloodiest wars in human history from occurring almost immediately afterwards.
I would also add hardware limitations. Moore’s Law is dead on half the metrics, and we’re approaching the Landauer limit. Even if the scaling laws hold, we might simply be incapable of keeping up with the computational demand.
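To put a rough number on that limit: Landauer’s principle sets a floor of $E_{\min} = k_B T \ln 2 \approx 3 \times 10^{-21}\,\mathrm{J}$ (about $0.018\,\mathrm{eV}$) for erasing a single bit at room temperature. Today’s logic is generally estimated to dissipate orders of magnitude more than that per bit operation, so there is headroom left, but it is finite headroom.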
If you’re selling GPUs, it is good for your bottom line to predict a glorious rise of AI in the future.
If you’re an AI company, it is profitable to say that your AI is already very smart and general.
If you’re running an AI-risk non-profit, predicting the inevitable emergence of AGI could attract donors.
If you’re an ML researcher, you can do some virtue signaling by comparing AGI to overpopulation on Mars.
If you’re an ethics professor, you can get funding for your highly valuable study of the trolley problem in self-driving cars.
If you’re a journalist / writer / movie maker, the whole debacle helps you sell more clicks / books / views.
Cynical, but true.
I agree, although I doubt the brain algorithm will end up at the core of AGI.
Upon their ashes, Japan was rebuilt into a nation-state so nonaggressive that the Japanese military is literally called the “Japanese Self-Defense Force” (自衛隊). This isn’t a euphemism like the United States’ “Department of Defense”. It really is a defensive force. Article 9 of the Japanese constitution prohibits Japan from maintaining a military or settling international conflicts through violence.
A nation-state so nonaggressive that it has aircraft carriers and plutonium stockpiles?
Are there theoretical limits to whether this can be done?
According to this graph, real GDP has grown by roughly a factor of 6 since 1960. That seems… way too low, intuitively. Consider:
I’m typing this post on my laptop (which conveniently has a backspace button and everything I type is backed up halfway around the world and I can even insert images trivially)...
while listening to spotify…
through my noise-canceling earbuds…
and there’s a smartphone on my desk which can give me detailed road maps and directions anywhere in the US and even most of the world, plus make phone calls…
and oh-by-the-way I have an internet connection.
Forgive my language, but this paragraph looks to me like an example of tech people being a bit too full of themselves sometimes. The IT sector is clearly a cherry-picked example and cannot be extrapolated to the rest of the economy. It’s also not a good proxy for utilons; a million-fold increase in transistor abundance does not correspond to a million-fold increase in value for society, diminishing marginal returns yada yada. One could have picked even more extreme examples, like the triple product in nuclear fusion, which has improved even faster than Moore’s law yet has generated approximately zero value for society thus far. On the other hand, average life expectancy in the US has only improved by 13% since 1960 (and has recently begun to drop), arguably a measure much closer to people’s wellbeing.
1960 real GDP (and 1970 real GDP, and 1980 real GDP, etc) calculated at recent prices is dominated by the things which are expensive today—like real estate, for instance. Things which are cheap today are ignored in hindsight, even if they were a very big deal at the time.
In other words: real GDP growth mostly tracks production of goods which aren’t revolutionized. Goods whose prices drop dramatically are downweighted to near-zero, in hindsight.
And I argue that that’s how it should be—in 1960 the average transistor was performing much more important tasks, like planning trajectories for moon missions or running banking systems, than it is in 2021, like letting people watch TikTok videos or play games in HD. On the other hand, people still need houses to live in no matter how fancy their smartphones become. For average people, real estate is genuinely a bigger deal now than even a massive increase in their phone’s camera resolution.
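To make the weighting effect concrete, here’s a toy two-good sketch (all numbers invented, purely to illustrate the mechanism, nothing like the BEA’s actual procedure):

```python
# Toy two-good economy: housing and compute. All numbers are made up;
# the point is only how the choice of price year changes "real" growth.
q_1960 = {"housing": 100, "compute": 1}          # quantities produced in 1960
q_2020 = {"housing": 200, "compute": 1_000_000}  # quantities produced in 2020

p_1960 = {"housing": 10, "compute": 50}     # compute was expensive in 1960
p_2020 = {"housing": 50, "compute": 0.001}  # compute is dirt cheap in 2020

def basket_value(quantities, prices):
    """Value of a year's output at a given set of prices."""
    return sum(quantities[good] * prices[good] for good in quantities)

# Measured at today's prices, the million-fold increase in compute is
# weighted near zero, so "real" growth is dominated by housing: ~2.2x.
growth_at_2020_prices = basket_value(q_2020, p_2020) / basket_value(q_1960, p_2020)

# Measured at 1960 prices, the very same physical change looks like a
# ~48,000x explosion, because compute carried a huge price tag back then.
growth_at_1960_prices = basket_value(q_2020, p_1960) / basket_value(q_1960, p_1960)

print(growth_at_2020_prices, growth_at_1960_prices)
```

Chaining (mentioned just below) softens this by re-weighting year over year, but it doesn’t change the basic point about revolutionized goods being downweighted.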
In fact, the real GDP graph at the beginning of this post uses a different method—the BEA (which calculates the “official” US GDP and produced the numbers in that graph) switched from fixed prices to “chaining” in 1996.
I think this is actually an ingenious way of putting productivity figures into a historical context and thereby allowing us to track progress at all. There are ways it can break, as I will discuss later, but it’s still far superior to pointing to Moore’s Law and saying “Did you know you’re actually trillions of times richer than the average person in 1960?”.
Now, I think the smoothness of real GDP growth tells us basically-nothing about the smoothness of AI takeoff. Even after a hypothetical massive jump in AI, real GDP would still look smooth, because it would be calculated based on post-jump prices, and it seems pretty likely that there will be something which isn’t revolutionized by AI.
I agree it’s silly to use GDP growth as a measure in AI takeoff scenarios, kind of like asking how big of an impact a civilization-ending meteor would have on the stock market (big, approx. 150 km in diameter). I don’t expect our current concepts of private property, ownership or indeed money as a coordination mechanism to survive AI takeoff.
But that’s just AI being AI.
Let’s take a less extreme example: Suppose that in the near future a pill were invented that extended your healthspan by a full 30 years (if you buy into the SENS Foundation’s rejuvenation paradigm, this is actually somewhat plausible). But like all new technologies, it is very difficult to produce initially and only gradually becomes more affordable over time.
I would expect people to be willing to pay large sums of money to access such a technology even if they could barely afford it—it’s a matter of life and death after all. This would give the longevity pill an enormous initial price tag. As the price comes down and the pill becomes more widely distributed, GDP receives a big boost since it is calculated using the old price tag, until the reference point resets.
But what if the longevity pill technology did not follow previous trends and were instead dumped onto the market at dirt-cheap prices? Not only would it not contribute much to GDP itself, it would also completely collapse the existing healthcare sector and render millions of people unemployed. It might actually register as negative growth.
Finally, consider the possibility that the pill made you immortal straight away. In this case, whatever initial effects the technology had on the economy, once everybody has undergone the treatment its sales will drop to zero and its manufacturer will go bankrupt, all while immortality becomes a mere background fact of human existence.
So in conclusion, GDP is more of a measure of economic activity than value, and growth is only a meaningful proxy for progress under the limited context of gradual adoption and improvement of new technologies. In a way, GDP growth has slow takeoff built in as an assumption.
Breadlines do do something. Those in the front tend to become better fed than the rest.
The ultimate question is the one of temporal discounting, and that question depends on how much we do/should value those post-singularity life years. If values can’t shift, then there isn’t really anything to talk about; you just ask yourself how much you value those years, and then move on. But if they can shift, and you acknowledge that they can, then we can discuss some thought experiments and stuff.
I think we’re getting closer to agreement as I’m starting to see what you’re getting at. My comment here would be that yes, your values can shift, and they have shifted after thinking hard about what post-Singularity life will be like and getting all excited. But the shift this causes is a larger multiplier in front of the temporally discounted integral, not the disabling of temporal discounting altogether.
Actually, I wonder what you think of this. Are you someone who sees death as a wildly terrible thing (I am)?
Yes, but I don’t think there is any layer of reasoning beneath that preference. Evading death is just something that is very much hard-coded into us by evolution.
In the pizza example, I think the value shift would be more along the lines of “I was prioritizing my current self too much relative to my future selves”. Presumably, post-dinner-values would incorporate the pre-dinner-self.
I don’t think that’s true. Crucially, there is no knowledge being gained over the course of dinner, only value shift. It’s not like you didn’t know beforehand that pizza was unhealthy, or that you will regret your decision. And if post-dinner self does not take explicit steps to manipulate future value, the situation will repeat itself the next day, and the day after, and so on for hundreds of times.
I think they can inspire you to change your values.
Taken at face value, this statement doesn’t make much sense, because it immediately raises the question: change according to what, and in what sense isn’t that change part of your values already? My guess here is that your mental model says something like “there’s a set of primal drives inside my head, like eating pizza, that I call ‘values’, and then there are my ‘true’ values, like a healthy lifestyle, which my conscious, rational mind posits, and I should change my primal drives to match my ‘true’ values” (pardon me for straw-manning your position, but I need it to make my point).
A much better model in my opinion would be that all these values belong to the same exact category. These “values” or “drives” then duke it out amongst each other, and your conscious mind merely observes and makes up a plausible-sounding socially-acceptable story about your motivations (this is, after all, the evolutionary function of human intelligence in the first place as far as I know), like a press secretary sitting silently in the corner while generals are having a heated debate.
At best, your conscious mind might act as a mediator between these generals, coming up with clever ideas that push the Pareto frontier of these competing values so that they can all be satisfied to a greater degree at the same time. Things like “let’s try e-cigarettes instead of regular tobacco—maybe it satisfies both our craving for nicotine and our long-term health!”.
Even high-falutin values like altruism or long-term health are induced by basic drives like empathy and social status. They are no different to, say, food cravings, not even in terms of inferential distance. Compare for instance “I ate pizza, it was tasty and I felt good” with “I was chastised for eating unhealthily, it felt bad”. Is there really any important difference here?
You could of course deny this categorization and insist that only a part of this value set represents your true values. The danger here isn’t that you’ll end up optimizing for the wrong set of values, since who’s to tell you what “wrong” is; it’s that you’ll be perpetually confused about why you keep failing to act upon your declared “true” values—why your revealed preferences keep diverging from your stated preferences, and you end up making bad decisions. Decisions that are suboptimal even when judged only against your “true” values, because you have been feeding your conscious, rational mind bad epistemics instead of leveraging it properly.
As an example, consider an immature teenager who doesn’t care at all about his future self and just wants to have fun right now. Would you say, “Well, he values what he values.”?
Haha, unfortunately you posed the question to the one guy out of 100 who would gladly answer “Absolutely”, followed by “What’s wrong with being an immature teenager?”
On a more serious note, it is true that our values often shift over time, but it’s unclear to me why that makes regret minimization the correct heuristic. Regret can occur in two ways: One is that we have better information later in life, along the lines of “Oh I should have picked these numbers in last week’s lottery instead of the numbers I actually picked”. But this is just hindsight and useless to your current self because you don’t have access to that knowledge.
The other is through value shift, along the lines of “I just ate a whole pizza and now that my food-craving brain-subassembly has shut up my value function consists mostly of concerns for my long-term health”. Even setting temporal discounting aside, I fail to see why your post-dinner-values should take precedence over your pre-dinner-values, or for that matter why deathbed-values should take precedence over teenage-values. They are both equally real moments of conscious experience.
But, since we only ever live and make decisions in the present moment, if you happen to have just finished a pizza, you now have the opportunity to manipulate your future values to match your current values by taking actions that make the salad option more available the next time the pizza-craving comes around, e.g. by shopping for ingredients. In AI lingo, you’ve just made yourself subagent-stable.
My personal anecdote is that as a teenager I did listen to the “mature adults” to study more and spend less time having fun. It was a bad decision according to both my current values and teenage-values, made out of ignorance about how the world operates.
As a final thought, I would give the meta-advice of not trying to think too deeply about normative ethics. Take AlphaGo as a cautionary tale: after 2000 years of pondering, the deepest truths of Go are revealed to be just a linear combination of a bunch of feature vectors. Quite poetic, if you ask me.
To be sure, I don’t actually think whether Accelerationism is right has any effect on the validity of your points. Indeed, there is no telling whether the AI experts from the surveys even believe in Accelerationism. A fast-takeoff model where the world experiences zero growth from now to Singularity, followed by an explosion of productivity, would yield essentially the same conclusions as long as the date is the same, as would any model in between. But I’d still like to take apart the arguments from Wait But Why just for fun:
First, exponential curves are continuous; they don’t produce singularities. This is what always confused me about Ray Kurzweil, as he likes to point to the smooth exponential improvement in computing yet in the next breath predict the Singularity in 2029. You only get discontinuities when your model predicts superexponential growth, and Moore’s law is no evidence for that.
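To spell out the distinction with a toy growth model (nothing specific to Kurzweil’s own curves): plain exponential growth $\dot{x} = kx$ gives $x(t) = x_0 e^{kt}$, which is finite at every finite $t$. But make the growth even slightly superexponential, $\dot{x} = k x^{1+\epsilon}$ with $\epsilon > 0$, and the solution becomes

$$x(t) = x_0\left(1 - \epsilon k x_0^{\epsilon}\, t\right)^{-1/\epsilon},$$

which blows up at the finite time $t_s = 1/(\epsilon k x_0^{\epsilon})$. In that regime the doubling time is proportional to the remaining countdown $t_s - t$, which is also the relevant fact for the next point.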
Second, while temporary deviations from the curve can be explained by noise for exponential growth, the same can’t be said so easily for superexponential growth. Here, doubling time scales with the countdown to Singularity, and what can be considered “temporary” is highly dependent on how long we have left to go. If we were in 10,000 BC, a slowing growth rate over half a century could indeed be seen as noise. But if we posit the Singularity at 2060, then we have less than 40 years left. As per Scott Alexander, world GDP doubling time has been increasing since 1960. However you look at it, the trend has been deviating from the ideal curve for far, far too long to be a mere fluke.
The most prominent example of many small S-curves adding up to an overall exponential trend line is, again, Moore’s law. From the inside view, proponents argue that doomsayers are short-sighted because they only see the limits of current techniques, but such limits have appeared many times before since the dawn of computing, and each time they were overcome by the introduction of a new technique. For instance, most recently, chip manufacturers had been using increasingly complex photolithography masks to print ever smaller features onto microchips using the same wavelengths of UV light, which isn’t sustainable. Then came the crucial breakthrough last year with the introduction of EUV, a novel technique that uses shorter wavelengths and allows the printing of even smaller features with simpler masks, so the refining process can start all over again.
But from the outside view, Moore’s law has buckled (notice the past tense). One by one, the trend lines have flattened out, starting with processor frequency in 2006, and most recently with transistors per dollar (Kurzweil’s favorite metric) in 2018. Proponents of Moore’s law have had to keep switching metrics for a decade and a half, and they have a few left—transistor density, for instance, or TOP500 performance. But the noose is tightening, and some truly fundamental limitations such as the Landauer limit are on the horizon. As I often like to say, when straight lines run into physical limitations, physics wins.
Keep in mind that as far as Moore’s law goes, this is what death looks like. A trend line never halts abruptly; it always peters out gradually at the end.
By the way, the reason I keep heckling Moore’s law is because Moore’s law itself is the last remnant of the age of accelerating technological progress. Outside the computing industry, things are looking much more dire.
Here are my thoughts. Descriptively, I see that temporal discounting is something that people do. But prescriptively, I don’t see why it’s something that we should do. Maybe I am just different, but when I think about, say, 100-year-old me vs. current 28-year-old me, I don’t feel like I should prioritize that version less. Like everyone else, there is a big part of me that thinks “Ugh, let me just eat the pizza instead of the salad, forget about future me”. But when I think about what I should do, and how I should prioritize future me vs. present me, I don’t really feel like there should be discounting.
I’m not sure the prescriptive context is meaningful with regard to values. It’s like having a preference over preferences. You want whatever you want, and what you should want doesn’t matter because you don’t actually want that, wherever that should came from. A useful framework to think about this problem is to model your future self as other people and reduce it to the classic egoism-altruism balance. Would you say perfect altruism is the correct position to adopt? Are you therefore a perfect altruist?
You could make up philosophical thought experiments and such to discover how much you actually care about others, but I bet you can’t just decide to become a perfect altruist no matter how loudly a philosophy professor might scream at you. Similarly, whether you believe temporal discounting to be the right call or not in the abstract, you can’t actually stop doing it; you’re not a perfect altruist with respect to your future selves and to dismiss it would only lead to confusion in my opinion.
I think so. By symmetry, imperfect anti-alignment will destroy almost all the disvalue the same way imperfect alignment will destroy almost all the value. Thus, the overwhelming majority of alignment problems are solved by default with regard to hyperexistential risks.
More intuitively, problems become much easier when there isn’t a powerful optimization process to push against. E.g. computer security is hard because there are intelligent agents out there trying to break your system, not because cosmic rays will randomly flip some bits in your memory.
Thank you for the post, it was quite a nostalgia trip back to 2015 for me because of all the Wait But Why references. However, my impression is that the Kurzweilian Accelerationism school of thought has largely fallen out of favor in transhumanist circles since that time, with prominent figures like Peter Thiel and Scott Alexander arguing that not only are we not accelerating, we can barely even keep up with 19th century humanity in terms of growth rate. Life expectancy in the US has actually gone down in recent years for the first time.
An important consideration that was left out is temporal discounting. Since you assumed linear scaling of value with post-Singularity QALYs, your result is extremely sensitive to your choice of post-Singularity life expectancy. I felt like it was moot to go into such detailed analysis of the other factors when this one alone could easily vary by ten orders of magnitude. By choosing a sufficiently large yet physically plausible number (such as 100 trillion years), you could justify almost any measure to reduce your risk of dying before Singularity and unambiguously resolve e.g. the question of driving risk.
But I doubt that’s a good representation of your actual values. I think you’re much more likely to do exponential discounting of future value, such that the integral of value over time remains finite even in the limit of infinite time. This should lead to much more stable results.
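To make the contrast explicit with a toy calculation (taking $V$ as a constant rate of value per year): with no discounting, the total is $\int_0^{T} V\,dt = VT$, which scales linearly with the assumed post-Singularity lifespan $T$ and therefore swings by however many orders of magnitude $T$ does. With exponential discounting at rate $r$, it becomes

$$\int_0^{\infty} V e^{-rt}\,dt = \frac{V}{r},$$

which stays finite even as $T \to \infty$; at, say, $r = 3\%$ per year, everything beyond the first couple of centuries contributes almost nothing to the integral.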
I predict that a lot of people will interpret the claim of “you should expect to live for 10k years” as wacky, and not take it seriously.
Really? This is LessWrong after all^^
Always beware of the spectre of anthropic reasoning though.
I think it’s fairly unlikely that suicide becomes impossible in AI catastrophes. The AI would have to be anti-aligned, which means creating such an AI would require precise targeting in AI design space in the same way creating a Friendly AI does. However, given the extreme disvalue a hyperexistential catastrophe produces, such scenarios are perhaps still worth considering, especially for negative utilitarians.
Are you sure you’re replying to the right comment?
I think the next big step will be legal rather than technical. Imo the Impossible Burger is already good enough that if it sneaked its way into existing standard fast-food products like Big Macs, most people would neither notice nor care. So in the end it will be a similar issue to GMO foods: widespread adoption will depend on whether businesses have to explicitly label plant-based alternatives as alternatives. Defaults really matter.
Doesn’t seem particularly relevant for the purpose of understanding trends; the underlying dynamics aren’t changed by slowing down time.
I strongly suggest looking at world records in TrackMania; it should be an absolute treasure trove of data for this purpose. 15+ years of history over dozens of tracks, with loads of incremental improvements and breakthrough exploits alike.
Here’s an example of one such incredible history:
I think it’s called signal jamming? An alarm that sounds all the time is just as useless as an alarm that never goes off.