Thanks for the writeup. I feel like there’s been a lack of similar posts and we need to step it up.
Maybe the only way for AI Safety to work at all is to analyze potential vectors of AGI attack and try to counter them one way or another. This seems like an approach that doesn’t conflict with other AI Safety research, since it requires, I think, an entirely different set of skills.
I would like to see a more detailed post from “doomers” on how they perceive these attack vectors, along with some healthy discussion of them.
It seems to me that AGI is not born Godlike, but rather becomes Godlike (while still constrained by the physical world) over some period of time, and this process should very much be possible to detect.
P.S. I really don’t get how people who know (I hope) that the map is not the territory can think that an AI can just simulate everything and pick the best option. Maybe I’m the one missing something here?