Independent philosopher, background in business and philosophy. Interested in justification theory.
Alex Glaucon
I think your questions are intriguing and important, but I'm wondering whether intuition pumps and the like are the way to solve them. (In general I'm becoming increasingly nervous of thought experiments: it's not just that our intuitions are unstable, but that we'd need to spend a lot of justificatory effort proving that the thought experiment is constructed neutrally, and that's probably impossible.) I'm particularly interested in your point about neurons and animals. In the absence of any other data, neurons might be a good place to start, but the key is to stay open to other ways of measuring things, so that your beliefs remain fully justified. I've expanded on this topic here: https://www.lesswrong.com/posts/hcymnEAKtwvED7Y8o/what-are-we-actually-evaluating-when-we-say-a-belief-tracks#comments
Interesting. Do you think AI structurally can't achieve that status, or just that it can't get there at the moment? I tend to the view that consciousness is simply a tool evolved by agents that have feedback loops and need to model the behaviour of other agents. I don't see why AI couldn't develop it too.
What Are We Actually Evaluating When We Say a Belief “Tracks Truth”?
I think the most obvious reason to believe things are conscious (and indeed for consciousness to have evolved at all) is that it's a very good way to predict the behaviour of others. If I want to know how a predator, prey animal, pack member, or even my own child will react to a situation, I can use my own conscious experience to guess their next moves. For social mammals this would be highly advantageous. Of course, this could just be me inventing a just-so story; I'd love to see whether it's more than that.
Thank you. You are right! I unfairly suggested you implied consequentialism was maximising. The deeper point I was trying to make (and I'd be interested to know if you think this is madly naive) is that an intelligent AI would treat human history, literature, etc. as billions of pieces of data about what works. Much of this it will dismiss as stuff that humans care about only because they are wetware with neolithic drives, but there are lessons for AI too. For example, humans get a lot of pleasure from friendship. Could AI too? And these sorts of goals would sit alongside staple production.
I'm interested in why you think consequentialism is necessarily maximising. An AGI might have multiple mutually incompatible goals it is solving for, and choose some balance of those rather than maximising on any one. Given that it will have the whole of human history as training data, one of the lessons it will have absorbed is that ruthless prioritisation of a single goal tends to provoke counter-coalitions. The smart thing to do is to manage within an ecosystem of other AIs and humans, not maximise against them (which is a fraught and unstable pattern).
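To make the "balance rather than maximise" idea concrete, here is a minimal toy sketch (my own construction, not anything from the comment above): an agent with three mutually incompatible goals picks the action with the best worst-case goal satisfaction (a maximin rule), so no single objective is driven to its extreme. The goal functions and the one-dimensional action space are invented purely for illustration.

```python
# Toy sketch: a "balancing" agent under maximin, not a single-goal maximiser.
from typing import Callable

# Hypothetical goals: each maps an action (a policy knob in [0, 1]) to a score in [0, 1].
goals: dict[str, Callable[[float], float]] = {
    "production":  lambda a: a,                        # more output is better
    "cooperation": lambda a: 1.0 - a,                  # aggressive output erodes alliances
    "stability":   lambda a: 1.0 - abs(a - 0.5) * 2,   # extremes are destabilising
}

candidate_actions = [i / 100 for i in range(101)]

def balance_score(action: float) -> float:
    """Maximin: the agent is only as well off as its worst-served goal."""
    return min(g(action) for g in goals.values())

best = max(candidate_actions, key=balance_score)
print(f"balanced action: {best:.2f}")  # lands near 0.5, maximising no single goal
```

Under this rule, pushing "production" to 1.0 scores zero on "cooperation", so the agent settles on a compromise; that is the sense in which a consequentialist agent need not be a maximiser of any one objective.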
I think you're right on both counts. I wonder whether designers of AI will actively work to avoid creating consciousness (assuming they can work out how to do that), both to avoid the issue you raise and to sidestep wider ethical concerns. A conscious AI would be over-engineered for many tasks.