I think it is far more reasonable to operate under the idea that progress might go on forever than under the idea that there is a bottom, especially a bottom just barely past what we already know. It seems to me that what you are suggesting is the default position most people throughout history have incorrectly held.
Also, even if there is a bottom, understanding the concept of 0 and 1 doesn’t mean you automatically understand all the concepts a computer can encode in data. Asserting that “the concepts that can be generated from 0 and 1 will slow down because of physical limitations” makes no sense.
The bandwidth at which a single human can learn is very, very tightly constrained. The bandwidth at which an ASI could generate new meaningful data is incomparably larger. No matter how good the explanation is, there is a very obvious problem of scale.
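To put a rough number on that scale gap, here is a back-of-envelope sketch. Every figure in it is an assumption chosen purely for illustration, not something from this discussion:

```python
# Rough back-of-envelope comparison of learning bandwidth vs. generation bandwidth.
# Every number below is an assumption chosen only to illustrate the scale gap.

WORDS_PER_MINUTE = 250        # assumed careful reading speed with comprehension
READING_HOURS_PER_DAY = 8     # assumed focused reading time per day

human_words_per_day = WORDS_PER_MINUTE * 60 * READING_HOURS_PER_DAY  # 120,000

# Hypothetical ASI output: the equivalent of 10,000 dense papers per day
# at ~8,000 words each (purely illustrative numbers).
asi_words_per_day = 10_000 * 8_000  # 80,000,000

ratio = asi_words_per_day / human_words_per_day
print(f"Human intake: {human_words_per_day:,} words/day")
print(f"ASI output:   {asi_words_per_day:,} words/day")
print(f"The ASI produces roughly {ratio:,.0f} human reading-days of material per day")
```

Change the assumed numbers however you like; the point is only that the gap is a ratio of orders of magnitude, not something a better explanation closes.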
How fast can you read with comprehension? Now how fast can you read with comprehension when you don’t know the definitions of all the words? How fast can you read the definitions of the new words in order to move forward with learning the main concept? How many new words are in those definitions which require you to also read more definitions just to understand the parent definition? How much progress has been made on other new concepts while you spent all this time reading? How many definitions have changed before you even get back to the main concept you were learning?
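That chain of questions can be sketched as a toy model: each unfamiliar definition pulls in a few more unfamiliar terms, while your reading budget stays fixed. The branching factor, depth, and reading cost below are assumptions for illustration only:

```python
# Toy model of the "definitions require more definitions" problem.
# Assumptions: each unfamiliar definition introduces a few more unfamiliar
# terms, down to some depth, and each definition costs fixed reading time.

BRANCHING = 3          # assumed new unknown terms per definition
DEPTH = 6              # assumed depth before hitting vocabulary you already know
MINUTES_PER_DEF = 10   # assumed minutes to read one definition with comprehension

# 1 + 3 + 9 + ... : a geometric series of definitions to chase down
total_defs = sum(BRANCHING ** d for d in range(DEPTH + 1))
total_hours = total_defs * MINUTES_PER_DEF / 60

print(f"Definitions to read before the main concept makes sense: {total_defs}")
print(f"Time spent just on prerequisites: ~{total_hours:.0f} hours")
# ...during which the frontier has kept moving and some of those definitions changed.
```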
Worrying that ASI will do bad stuff because we told it to without bothering to understand the consequences is pretty far down my list of things to worry about. I can understand “eliminates the world as we know it” without understanding the physics by which it does this. Summaries and simplifications are a thing. I’m gonna ask “so hey what consequences would this have that I’d care about” and the ASI, because it’s super-smart, will answer in terms I can understand.
If it doesn’t, I’ll stick to asking it to do things I can understand. Like improving its ability to summarize and communicate.
You haven’t addressed my point that a smart ASI will be good at summarizing and simplifying.
Maybe you’re not concerned with practical dangers, just the possibility that humans won’t always understand everything ASIs come up with. In which case, that’s fine; I’m worried about everyone dying long before we get the opportunity to be limited by our understanding. Not being able to fully appreciate everything an ASI comes up with might be a limitation, but it only becomes relevant at a level of success far beyond anything we can imagine, so I’m putting it in the category of planning the victory party before working out a plan to win.
You haven’t addressed my point that a smart ASI will be good at summarizing and simplifying.
The story itself is entirely about how this doesn’t matter. I also very directly addressed this in more detail in my last reply.
The point I am presenting is more fundamental than the various topics you keep bringing in, which are not part of my story or my replies. My story is about something that happens somewhere along the path of every outcome, good or bad. I am unsure why you are trying so hard to dismiss it without addressing my replies, so sure that it only happens after success, and so sure that there is no practical reason to think about it when trying to understand what is happening and what might happen.