I just checked, and while the other answers are fine, math.log(2)**math.exp(2) is 0.06665771193088375. ChatGPT is off by almost an order of magnitude when given a quantitative question it can't look up in its training data.
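For anyone who wants to reproduce the check, here's the one-liner spelled out with the standard library (nothing beyond `math` is needed):

```python
import math

# log(2) raised to the power e^2, i.e. math.log(2)**math.exp(2)
base = math.log(2)      # natural log of 2, ~0.6931
exponent = math.exp(2)  # e squared, ~7.3891
result = base ** exponent
print(result)  # ~0.06665771193088375
```

Since the base is below 1 and the exponent is about 7.4, the result has to be small, which is the kind of sanity check the model skips.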
Yep. 2⁄3 is still beyond most human savants, but it is a failure that the machine won't even try the "mental math" needed to see that its answer is off by a lot.
Obviously future versions of the product will just have isolated/containerized Linux terminals and Python interpreters they can query, so this is a temporary problem.