Worrying that ASI will do bad stuff because we told it to, without our bothering to understand the consequences, is pretty far down my list of things to worry about. I can understand “eliminates the world as we know it” without understanding the physics by which it does so. Summaries and simplifications are a thing. I’m gonna ask “so hey, what consequences would this have that I’d care about?” and the ASI, because it’s super-smart, will answer in terms I can understand.
If it doesn’t, I’ll stick to asking it to do things I can understand. Like improving its ability to summarize and communicate.
You haven’t addressed my point that a smart ASI will be good at summarizing and simplifying.
Maybe you’re not concerned with practical dangers, just the possibility that humans won’t always understand everything ASIs come up with. In which case, that’s fine; I’m worried about everyone dying long before we get the opportunity to be limited by our understanding. Not being able to fully appreciate everything an ASI comes up with might be a limitation, but that scenario lies so far past any success we can currently imagine that I’m putting it in the category of planning the victory party before working out a plan to win.
The story itself is entirely about how this doesn’t matter. I also addressed this directly, and in more detail, in my last reply.
The point I am presenting is more fundamental than the various topics you keep bringing in, which are not part of my story or my replies. My story is about something that happens somewhere along the path of all outcomes, good or bad. I am unsure why you are trying so hard to dismiss it without addressing my replies, why you are so sure it only happens after success, and why you are so sure there is no practical reason to think about it when trying to understand what is happening and what might happen.