agreed. realistically we'd only approach anything resembling WBE via behavior-cloning AI, which nicely demonstrates the problem you'd face after becoming a WBE. my point in making this comment is simply that WBE doesn't help even in theory, assuming we somehow avoid building an agent ASI and go straight for high-fidelity neuron emulation. if we really, really tried, going for WBE first is possible, but at this point it's pretty obvious we can reach hard ASI without it, so nobody in charge of a team like DeepMind is going to pursue WBE when they can focus directly on AI capability plus a dash of safety to make the nerds happy.