An interesting aspect of this “race” is that it’s as much about alignment as it is about capabilities. The main topic on everyone’s mind right now seems to be the (lack of) correctness of the generated information, and the goal “the model consistently answers queries truthfully” is clearly highly relevant to alignment.
Although I find this interesting, I don’t find it surprising. Productization naturally forces companies to solve the problem of “how do I get this system to consistently do what users want it to do” in a way that research incentives alone don’t.