That gives them a wider range of abilities; I don’t think it constitutes a fundamental change to their way of thinking, or that it makes them more intelligent.
(It doesn’t significantly improve their performance on text-based problems.)
That’s because it is doing roughly the same type of “learning” on a different type of data.
This doesn’t make them able to discuss, say, abiogenesis or philosophy with genuinely critical, human-like thought. In these fields they are strictly imitating humans.
As in: imagine you replaced all the training data about abiogenesis with plausible-sounding but subtly wrong theories. The LLM would simply repeat those wrong theories slavishly, wouldn’t it?
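To make the intuition concrete, here is a deliberately tiny toy (nowhere near a real LLM; the “corpus” string and its wrong claim are invented for this example): a word-level bigram model trained on a single subtly wrong abiogenesis statement. A purely statistical text model like this has no mechanism for noticing the claim is false; it can only replay or remix the statistics of whatever text it was trained on:

```python
import random
from collections import defaultdict

# Hypothetical training corpus: one plausible-sounding but wrong claim.
corpus = (
    "life began when clay crystals templated the first proteins "
    "and those proteins later invented RNA to store their structure"
).split()

# Count word-to-next-word transitions observed in the corpus.
transitions = defaultdict(list)
for prev, nxt in zip(corpus, corpus[1:]):
    transitions[prev].append(nxt)

def generate(start: str, max_words: int = 15) -> str:
    """Sample a continuation; with one source text, this can only
    replay or lightly remix that text."""
    words = [start]
    for _ in range(max_words - 1):
        options = transitions.get(words[-1])
        if not options:  # no observed successor: stop
            break
        words.append(random.choice(options))
    return " ".join(words)

print(generate("life"))
# -> "life began when clay crystals templated the first proteins and ..."
```

Running this prints the wrong claim (or a shuffled variant of it) back verbatim, which is exactly the point of the thought experiment: the output distribution is anchored to the training text, true or false.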