I have read the sequences. Since Yudkowsky so thoroughly refuted reinforcement learning, I don't think that idea deserves to be regarded as a feasible approach to Friendly AI.
On the other hand, I wasn't particularly aware of the wider AGI movement, so thanks for that. To be clear, when I say simultaneous AGI projects, I mean projects at a similarly advanced stage of development at that point in time—but your point stands.