I was just thinking about how this pattern might apply to software engineering, and I’m starting to suspect that it largely doesn’t.
Here’s my thinking. I use AI a lot for things like brainstorming solutions. Rather than sitting around trying to think of ways to solve a problem myself, I describe the problem to an AI and ask it for ways it thinks the problem might be solved. I don’t always take one of its options, but the process of asking is usually enough to get me to figure out how I want to solve the problem, and sometimes I get lucky and it proposes the right thing to do.
On the surface this seems analogous to just making the moves the AI suggests, but in practice I think it’s not, because we’ve been doing something similar in software engineering for years as a way of learning. Historically it’s pretty normal to have a more senior engineer figure out how to solve a problem and a more junior engineer implement it. Part of the point of this, beyond specialization, is that the junior learns by imitating the senior. Admittedly this doesn’t always work, but the approach is structurally the same as playing with a Go AI assistant, and it seems plausible that a lot could be learned about software engineering by using an AI to help bootstrap oneself toward making good decisions and having good judgement.