If we have already found features/vectors for hallucinations, why haven't major AI companies tried shifting them downwards when deploying their AIs? Does reducing their strength actually decrease hallucinations? Is there a reason this would not be helpful in practice?
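For concreteness, here is a minimal sketch (PyTorch with Hugging Face `transformers`) of what "shifting a feature downwards" could mean in practice: subtracting a steering vector from the residual stream at one layer during generation. The model, layer index, strength `alpha`, and the placeholder `steering_vec` are all illustrative assumptions; a real setup would use a direction actually extracted for hallucinations (e.g. via a probe or a sparse autoencoder), not random noise.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "gpt2"  # stand-in model; any decoder-only LM works similarly
tok = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)
model.eval()

layer_idx = 6   # hypothetical layer where the feature was found
alpha = 4.0     # steering strength; "shifting downwards" = subtracting
hidden_size = model.config.hidden_size
steering_vec = torch.randn(hidden_size)  # placeholder for a real extracted vector
steering_vec = steering_vec / steering_vec.norm()

def steer(module, inputs, output):
    # GPT-2 blocks return a tuple; the hidden states come first.
    hidden = output[0]
    hidden = hidden - alpha * steering_vec.to(hidden.dtype)
    return (hidden,) + output[1:]

handle = model.transformer.h[layer_idx].register_forward_hook(steer)
try:
    ids = tok("The capital of Australia is", return_tensors="pt")
    out = model.generate(**ids, max_new_tokens=20)
    print(tok.decode(out[0], skip_special_tokens=True))
finally:
    handle.remove()  # detach the hook so the unsteered model is untouched
```

The hook only modifies activations while it is registered, so the intervention can be switched on per request without retraining or permanently altering the weights.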
Why assume they haven’t?