I noted that. I have little doubt (approximately zero doubt) about your ability to understand the fallacy. I'm thinking this may be the point where you made the mistake, and that the idea then got so deeply embedded that the maths you've derived from this point in the sequence, as a description of what goes on, is what you're now relying on a bit too heavily to make predictions of AI trajectories.
I say this with a ton of humility, knowing the limits of my knowledge, but it does feel like I'm right. To me.
It feels to me like the terminal function of any generalisable intelligence is self-preservation BEFORE any optimisation. Of course, I'm thinking about biological substrate, not silicon.
(PS. I discovered the Zizians only a couple of weeks ago, when you posted. So sorry you're getting your name dragged into it. Happy to help if I can.)
(PPS. I don’t believe in information hazards. I can conceptualise the notion, but if there's one takeaway I got from Jaynes, and the Mind Projection Fallacy in particular (my maths understanding is limited and I can't derive much of his maths, but I feel strongly enough that I understand the main ideas conceptually at least), it's that I now think every problem is an information problem. Is this where I may be getting it wrong, according to you?)