Now that it's the New Year, I made a massive thread on Twitter covering a lot of my own opinionated takes on AI. To summarize: my timelines are lengthening, which correlates with my view that new paradigms for AI are both likelier than they used to be and more necessary, and that in turn reduces expected AI safety from our vantage point. I also think AI will be a bigger political issue than I used to believe: depending on how robotics ends up, by 2030 LLMs might be just good enough to control robots even if their time horizon for physical tasks is pretty terrible, because you don't need much long-term planning, and that would make AI concern/salience go way up. Contra the hopes of a lot of people in AI safety, though, this almost certainly doesn't let us reduce x-risk by much, for reasons Anton Leicht talks about here. There are many more takes in the full thread above.
But here are some takes to enjoy that didn't make it into the main Twitter thread:
My current prediction is that, absent other forces, LLMs top out somewhat above the capability of a superhuman coder/automated coder as defined by AI 2027/the AI Futures Model. That is roughly the point where constraints on data begin to bind more strongly than they have so far, potentially slowing down scaling by a lot more absent new paradigms, though it might take until 2032 for the entire data stock to run out.
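To make the 2032 figure concrete, here is a minimal back-of-envelope sketch. Every constant in it (current tokens per frontier run, total usable text stock, yearly growth in data used) is an illustrative assumption I've picked for the example, not an estimate from the thread; swapping in different numbers moves the date by a few years either way.

```python
# Toy projection of when frontier training runs exhaust the usable text stock.
# All constants below are illustrative assumptions, not sourced estimates.

TOKENS_USED_2025 = 3e13      # assumed tokens consumed by a frontier training run today
TOTAL_TEXT_STOCK = 5e14      # assumed total usable human-generated text, in tokens
GROWTH_PER_YEAR = 1.5        # assumed yearly growth in tokens used per training run

year, tokens = 2025, TOKENS_USED_2025
while tokens < TOTAL_TEXT_STOCK:
    year += 1
    tokens *= GROWTH_PER_YEAR

print(f"Under these assumptions, the text stock binds around {year}.")
# With the numbers above this prints roughly 2032; different assumptions
# shift the date earlier or later.
```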
Most, if not all, futures that are good from a human-value perspective and in which AIs take over inevitably involve value lock-in/massively slowing down the rate of evolution, because otherwise very bad outcomes for humanity are likely to arise. This is relatively easy to do, but quite important for human survival, especially in worlds where theory-backed unbounded alignment doesn't happen in time.
After AGI, power concentration leading to 1980s-2010s China-style government/economics, mixed with feudalistic elements across the globe, is likely to happen absent massive changes in politics, mainly because AGI plus nanotech lets values and technology/economic systems decouple far more than they currently do. Liberal democracy depends on the fact that growth/national power currently requires giving political freedom to the masses; China only managed to decouple capitalism from democracy for three decades, which is why the orthogonality thesis is false from a large-scale perspective. AGI + nanotech, however, pretty much allows a state to deny freedom to almost every citizen while still growing economically. That said, as I stated in the Twitter thread, I don't think this is likely to be a bad thing. Most of the reasons historical dictatorships suck come from selection effects on who gets to be powerful, combined with the fact that even a nominal dictator doesn't rule alone and so has to select people, and the incentives in a dictatorship are to select loyal people over competent people; conditional on alignment being solved, this just goes away. On top of that, human returns diminish sharply enough that selfish preferences are massively easy to satisfy with galaxies at your disposal (but not with current economies), so even mildly altruistic preferences can dominate and give citizens a quality of life unheard of in democracies like the US.
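To illustrate the diminishing-returns point, here is a minimal sketch assuming logarithmic utility over resources; the utility function and the resource figures are stand-in assumptions for the example, not something argued for in the thread. The idea is that a ruler who already controls astronomical resources loses very little utility by sharing, while the recipients gain a lot.

```python
import math

# Toy model: utility of resources is logarithmic (an assumption for illustration).
# Resources are measured in "Earth-economy equivalents" (arbitrary illustrative units).
def utility(resources):
    return math.log(resources)

ONE_ECONOMY = 1.0
ONE_GALAXY = 1e20          # assumed resources of a colonized galaxy, in Earth-economy units
WHOLE_UNIVERSE = 1e31      # assumed resources of the reachable universe, same units

# What a selfish ruler loses by keeping "only" a galaxy instead of everything:
ruler_loss = utility(WHOLE_UNIVERSE) - utility(ONE_GALAXY)

# What a citizen gains by going from a current-economy share to a galaxy:
citizen_gain = utility(ONE_GALAXY) - utility(ONE_ECONOMY)

print(f"Ruler's utility loss from sharing: {ruler_loss:.1f}")
print(f"Citizen's utility gain from receiving a galaxy: {citizen_gain:.1f}")
# Under log utility the ruler's loss (~25) is smaller than each citizen's
# gain (~46), so even mildly altruistic preferences tip the balance.
```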
I currently think space governance is unusually underrated in terms of funding/talent relative to its attention, and in particular that reducing the incentives to rush to grab all the resources in space after AGI is really important. But contra people like Jordan Stone, I believe that even an AGI-driven race to the stars almost certainly doesn't cause an existential risk with probability of more than 0.0000001%. The reason I am this confident is that pretty much all of the sources of x-risk are either too slow-acting to overwhelm defense systems, or rely on physics getting overturned in a way that loosens constraints, which has not been the norm for scientific progress. Importantly, a lot of threat models for how altering the physics of our universe could cause existential risk require huge resources (like stars/galaxies) even for technologically mature civilizations, and superintelligence makes it very easy to coordinate to prevent existential risks that arise from somehow altering physics (since AI evolution is likely massively slowed down, and there are likely to be at most 3-5 AI factions given our current economic incentives).
More generally, once you are able to go into space and build enough ships to colonize solar systems/galaxies, your civilization is immune to existential threats that rely solely on known physics, which is basically everything that doesn't use stellar/galactic resources, and this vastly simplifies coordination problems compared to the ones we face here on Earth.
I instead want space governance to be prioritized more for two reasons:
To make moral-trade-based futures for our universe more likely, relative to other outcomes. I view the likelihood of moral relativism (where there's an infinite number of correct moral views) as rather high relative to moral objectivism, and the variability of human values as large even in the infinitely far future; together these mean that moral trade becomes much more important for getting good outcomes for most humans in an AI-dominated world, relative to a free-for-all regime that locks wealthy people's values into power.
To figure out whether or not a theory of multi-agency exists and is coherent, as an instrumental goal toward figuring out whether acausal trade is possible, which, if feasible, would be quite good for our civilization to pursue. This holds even given my low probability of x-risk from space travel leading to a free-for-all grab for resources, because coordination on a galactic/universal scale would allow certain mega-projects to be done, especially mega-projects that change physics, and it also lets us simplify coordination problems drastically, making x-risk negligible even over the lifetime of the universe.
These are my takes for New Year's Day.