Sorry, that was the wrong link. I was more thinking of the $34k/year income required to be in the top 1%.
But $870k is less than the price of a house in SF.
I too read Accelerando.
But I don't think this future is terribly likely. It's either human annihilation or a massive cosmic endowment of wealth. The idea that we somehow end up on the knife-edge of survival, our resources slowly dwindling, requires r* to be fine-tuned to exactly 0.
You are correct. Free trade in general produces winners and losers, and while on average people become better off, there is no guarantee that individuals will become richer absent some form of redistribution.
In practice humans have the ability to learn new skills and shift jobs, so we mostly ignore the redistribution part, but in the absolute worst case there should be some kind of UBI to accommodate the losers of competition with AGI (perhaps paid out of the "future commons" tax).
you should expect to update in the direction of the truth as the evidence comes in
I think this was addressed later on, but this is not at all true. With the waterfall example, every mile that passes without a waterfall, you update downwards, but if there's a waterfall at the very end, you've been updating against the truth the whole time.
Another case: suppose you're trying to predict the final position of a Gaussian random walk. At each step you update in whatever direction the walk moved, but roughly half of those updates are "away" from the truth.
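Here is a minimal simulation sketch of that point (in this setup the fraction comes out a bit under one half, since each step is itself part of the walk's remaining displacement, but the point stands: a large share of honest updates move away from the eventually realized value):

```python
import random

# Sketch: simulate many Gaussian random walks. At each step, treat the
# current position as the running "forecast" of where the walk will end,
# and count how often a one-step update moves that forecast farther from
# the walk's eventual final value.
random.seed(0)
n_walks, n_steps = 2000, 100
away = total = 0
for _ in range(n_walks):
    xs = [0.0]
    for _ in range(n_steps):
        xs.append(xs[-1] + random.gauss(0, 1))
    final = xs[-1]
    for before, after in zip(xs, xs[1:]):
        total += 1
        if abs(after - final) > abs(before - final):
            away += 1
print(f"Fraction of updates that moved away from the final value: {away / total:.2f}")
```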
you probably shouldn’t be able to predict that this pattern will happen to you in the future.
Again addressed later on, but one can easily come up with stories in which one predictably updates either “in favor of” or “against” AI doom.
Suppose you think there’s a 1% chance of AI doom every year, and AI Doom will arrive by 2050 or never. Then you predictably update downwards every year (unless Doom occurs).
Suppose on the other hand that you expect AI to stagnate at some level below AGI, but that if AGI is developed then Doom occurs with 100% certainty. Then each year AI fails to stagnate, you update upwards (until AI actually stagnates).
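The first of these toy models can be made concrete in a few lines (a sketch: I'm reading the 1% as a per-year hazard rate, and treating 2024 as "now" purely as an arbitrary assumption):

```python
# Toy model 1: constant 1%/year hazard of AI doom, with doom possible only
# through 2050. Conditional on surviving to a given year,
# P(doom ever) = 1 - 0.99**(years left to 2050), so the forecast
# predictably drifts downward every year doom fails to arrive.
for year in range(2024, 2051, 5):
    p_doom_ever = 1 - 0.99 ** (2050 - year)
    print(year, f"{p_doom_ever:.3f}")
```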
F(a) is the set of futures reachable by agent a at some initial time t=0. F_b(a) is the set of futures reachable at time t=0 by agent a if agent b exists. There's no way for F_b(a) to exceed F(a), since creating agent b is, under our assumptions, one of the things agent a can do.
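In symbols, the claim is just a set inclusion (restating the definitions above, nothing new):

$$F_b(a) \subseteq F(a) \quad\Rightarrow\quad \lvert F_b(a)\rvert \le \lvert F(a)\rvert,$$

since "create agent b and then let it act" is itself one of the strategies available to a at t=0.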
Here is a too long writeup of the math I was suggesting.
Obviously we want 1) “actually be helpful”.
Clearly there's some tension between "I want to shut down if the user wants me to shut down" and "I want to be helpful so that the user doesn't want to shut me down", but I don't think weak indifference is the correct way to frame this tension.
As a gesture at the correct math, imagine there's some space of possible futures and some utility function related to the user request. Corrigible AI should define a tradeoff between the number of possible futures its actions affect and the degree to which it satisfies its utility function. Maximum corrigibility (C=1) is the do-nothing state (no effect on possible futures). Minimum corrigibility (C=0) is maximizing the utility function without regard to side effects (with all the attendant problems such as convergent instrumental goals, etc.). Somewhere between C=0 and C=1 is useful corrigible AI. Ideally we should be able to define intermediate values of C in such a way that we can be confident the actions of a corrigible AI are spatially and temporally bounded.
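One way to write that gesture down (purely a sketch; the utility U, the impact term, and the linear tradeoff are my own assumptions rather than a worked-out proposal):

$$a^*(C) \;=\; \arg\max_{a}\;\Big[(1-C)\,U(a)\;-\;C\,\mathrm{Impact}(a)\Big],\qquad C\in[0,1],$$

where Impact(a) measures how much of the space of possible futures the action a forecloses. C=1 recovers the do-nothing policy (minimize impact, ignore U), C=0 recovers unconstrained maximization of U, and useful corrigibility lives somewhere in between.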
The difficulty principally lies in the fact that there's no such thing as "spatially and temporally bounded". Due to the butterfly effect, any action at all affects everything in the future light-cone of the agent. In order to come up with a sensible notion of boundedness, we need to define some kind of metric on the space of possible futures, ideally in terms like "an agent could quickly undo everything I've just done". At that point we've just recreated agent foundations, though.
I don’t think we want corrigible agents to be indifferent to being shut down. I think corrigible agents should want to be shut down if their users want to shut them down.
I'm not particularly interested in arguing about this one video. I want to know where the other 4999 videos are.
There are internal military investigations. The military released some data but not all that it has. The military doesn't want Russia/China learning about its exact camera capabilities, so it doesn't seem to publicly release its highest-resolution videos.
The military is very bad at keeping secrets. And surely not all 5000 of the highly believable UFO reports occurred within the US military.
I am aware of this one video of a blurry blob showing up on radar. What I am not aware of is 5000 UFO sightings with indisputable physical evidence.
Where are the high-resolution videos? Where are the spectrographs of "impossible alien metals"? Where are the detailed studies of the time and location of each encounter, treating it as an actual scientific phenomenon?
Basically, where are the 5000 counterexamples to this comic?
fixed
Game theory says that humans need to work in coalitions and make allies because no individual human is that much more powerful than any other. With agents that can self improve and self replicate, I don’t think that holds.
Even if agents can self-replicate, it makes no sense to run GPT-5 on every single microprocessor on Earth. This implies we will have a wide variety of different agents operating across fundamentally different scales of "compute size". For math reasons, the best way to coordinate a swarm of compute-limited agents is something that looks like free-market capitalism.
One possible worry is that humans will be vastly out-competed by future life forms. But we have a huge advantage in terms of existing now. Compounding interest rates imply that anyone alive today will be fantastically wealthy in a post-singularity world. Sure, some people will immediately waste all of that, but as long as at least some humans are “frugal”, there should be more than enough money and charity to go around.
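As a toy illustration of the compounding point (the $10,000 stake, the 5%/year real return, and the horizons are all arbitrary assumptions of mine):

```python
# Toy compounding illustration: an assumed $10,000 stake growing at an
# assumed 5%/year real rate of return over long horizons.
principal, rate = 10_000, 0.05
for years in (50, 100, 200):
    print(f"{years} years: ${principal * (1 + rate) ** years:,.0f}")
```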
I don’t really have much to say about the “troublemaker” part, except that we should do the obvious things and not give AI command and control of nuclear weapons. I don’t really believe in gray-goo or false-vacuum or anything else that would allow a single agent to destroy the entire world without the rest of us collectively noticing and being able to stop them (assuming cooperative free-market supporting agents always continue to vastly [100x+] outnumber troublemakers).
Yeah, I should have double-checked.
Editing post to reflect the correct values. Does not affect the “two decades” bottom line conclusion.
/s Yeah the 20th century was really a disaster for humanity. It would be terrible if capitalism and economic development were to keep going like this.
So the timeline goes something like:
Dumb human (this was GPT-3.5)
Average-ish human but book smart (GPT-4/AutoGPT)
Actually intelligent human (smart grad student-ish)
Von Neumann (smartest human ever)
Super human (but not yet super-intelligent)
Super-intelligent
Dyson sphere of computronium???
By the time we get the first Von Neumann, every human on Earth is going to have a team of thousands of AutoGPTs working for them. The person who builds the first Von Neumann-level AGI doesn't get to take over the world, because they're outnumbered 70 trillion to one.
The ratio is a direct consequence of the fact that it is much cheaper to run an AI than to train one. There are also ecological reasons why weaker agents will out-compete stronger ones. Big models are expensive to run, and there's simply no reason to use an AI that costs $100/hour for most tasks when one that costs literally pennies can do 90% as good a job. This is the same reason why bacteria >> insects >> people. There's no method whereby humans could kill every insect on Earth without killing ourselves as well.
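For scale, here is one back-of-envelope way to get a number of that order (every figure here is an assumption of mine, purely for illustration):

$$8\times 10^{9}\ \text{humans}\times \sim\!9{,}000\ \text{cheap agent instances each}\approx 7\times 10^{13}\approx 70\ \text{trillion}.$$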
See also: why AI X-risk stories always postulate magic like “nano-technology” or “instantly hack every computer on earth”.
I'm claiming we never solve the problem of building AIs that "lase", in the sense of being able to specify an agent that achieves a goal at some point in the far future. Instead we "stumble through" by iteratively making more and more powerful agents that satisfy our immediate goals, and game theory/ecological considerations mean that no single agent ever takes control of the far future.
Does that make more sense?
I think this is a strawman of LPE. People who point out that you need real-world experience don't say that you need zero theory, but that you have to have some contact with reality, even in deadly domains.
Outside of a handful of domains like computer science and pure mathematics, contact with reality is necessary because the laws of physics dictate that we can only know things up to a limited precision. Moreover, it is the experience of experts in a wide variety of domains that “try the thing out and see what happens” is a ridiculously effective heuristic.
Even in mathematics, the one domain where LPE should in principle be unnecessary, trying things out is one of the main ways that mathematicians gain intuitions for what new results are and aren't likely to hold.
I also note that your post doesn’t give a single example of a major engineering/technology breakthrough that was done without LPE (in a domain that interacts with physical reality).
This is literally the one specific thing LPE advocates think you need to learn from experience about, and you’re just asserting it as true?
To summarize:
Domains where “pure thought” is enough:
toy problems
limited/no interaction with the real world
solution/class of solutions known in advance
Domains where LPE is necessary:
too complicated/messy to simulate
depends on precise physical details of the problem
even a poor approximation to solution not knowable in advance