All of my philosophy here actually comes from trying to figure out how to build a self-modifying AI that applies its own reasoning principles to itself in the process of rewriting its own source code. So whenever I talk about using induction to license induction, I’m really thinking about an inductive AI considering a rewrite of the part of itself that performs induction. If you wouldn’t want the AI to rewrite its source code to not use induction, your philosophy had better not label induction as unjustifiable.
This changes the way I see Rationality A-Z: I no longer think of it as simply you communicating your philosophy when you wrote those posts, but as you actually using that philosophy to make something greater than any post could ever be.