I wonder what could be the alternative to neural nets. Even the AI-2027 team implied that Agent-5 would have “arguably a completely new paradigm, though neural networks will still be involved”. Suppose that the only alternative to LLMs was something like simulated human or animal brains taught to play, experiment, talk to each other, read books, write essays, draw pictures, do math and coding. Then how plausible is it that simulated animals would also learn human-like values, but not the value of caring about the intellectually weak humans?
Caring about the weak is not a trait that would be expected to arise naturally in an ASI. Humans care about the weak because, evolutionarily, taking care of the weak in the tribe was beneficial, and so it got “trained” into us (mostly). Even if an ASI had an animal-like brain, it would not naturally “evolve” to care about the weak unless we gave it an incentive to.