The risk of gradual disempowerment (erosion of control) or of near-term complete extinction from AI may sound like sci-fi to anyone who hasn’t spent years taking the idea seriously, but it won’t be solved by methods that are themselves sci-fi, with no prospect of becoming reality. It’s not the severity of the consequences that makes a problem important; it’s having a reasonable angle of attack.
There needs to be a sketch of how any of this can actually be done, and I don’t mean the technical side. On the technical side, you can simply avoid building AI until you really know what you are doing; that poses no technical difficulty. It’s the way human society works that keeps this from being a feasible plan in today’s world.
I definitely agree, Vladimir. I think this “place AI” can be built, though it may take longer than agentic AGI. We discussed it recently in this thread, and we have some possible UIs for it. I’m a fan of Alan Kay; at Xerox PARC they wrote software for systems that would only become widespread in the future.
The more radical, further-down-the-road version: “Static place AI”
(Substantially edited my comment to hopefully make the point clearer.)
Yes, I agree. People like shiny new things, so by creating another shiny new thing that is safer, we can potentially steer humanity away from the dangerous options and toward the safer ones. I don’t want people to abandon their own approaches to safety, of course; I just try to contribute what I can. I’ll try to make the proposal more concrete in the future. Thank you for suggesting it!