That’s fair. I guess I’m worried that forecasting only teaches a type of calibration that doesn’t necessarily generalize broadly? Much to think about...
For it to generalize broadly, you could forecast a broad range of events. For each patient's medical history you can forecast how it progresses. For each official government statistic you can forecast how it evolves. For each forward-looking statement in a company's earnings call you can try to make it specific and forecast it. For each registered clinical trial you can forecast whether it completes and, conditional on completion, its outcomes.
xAI can forecast all sorts of different variables about its users. Will a given user post more or less about politics in the future? Will they move left or right politically?
When it comes to coding AIs, you can predict all sorts of questions about how a codebase will evolve in the future. You can forecast whether or not unit tests will fail after a given change.
Whenever you ask the AI to make decisions that have external consequences, you can make it forecast those consequences.
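To make the training signal concrete: however the forecasting questions are sourced (test failures, trial outcomes, user behavior), the model's stated probabilities can be scored against resolved outcomes with a proper scoring rule, which rewards calibration rather than confident guessing. Here's a minimal sketch using the Brier score; the function names and the example data are purely illustrative, not anything a lab actually uses.

```python
# Minimal sketch: scoring elicited forecasts with the Brier score (a proper
# scoring rule), so the score is optimized only by reporting calibrated
# probabilities. Forecast/outcome data below are invented for illustration.

def brier_score(p: float, outcome: int) -> float:
    """Squared error between the predicted probability and the 0/1 outcome.
    Lower is better; a constant 0.5 forecast always earns 0.25."""
    return (p - outcome) ** 2

def calibration_buckets(forecasts, outcomes, n_bins=10):
    """Group forecasts into probability bins and compare the mean forecast
    to the observed frequency in each bin (a crude reliability diagram)."""
    bins = [[] for _ in range(n_bins)]
    for p, y in zip(forecasts, outcomes):
        idx = min(int(p * n_bins), n_bins - 1)  # clamp p == 1.0 into last bin
        bins[idx].append((p, y))
    report = []
    for idx, items in enumerate(bins):
        if not items:
            continue
        mean_p = sum(p for p, _ in items) / len(items)
        freq = sum(y for _, y in items) / len(items)
        report.append((idx, len(items), mean_p, freq))
    return report

if __name__ == "__main__":
    # e.g. "will the unit tests fail after this change?" -- hypothetical data
    forecasts = [0.9, 0.2, 0.7, 0.05, 0.6]
    outcomes = [1, 0, 1, 0, 0]
    avg = sum(brier_score(p, y) for p, y in zip(forecasts, outcomes)) / len(outcomes)
    print(f"mean Brier score: {avg:.3f}")
    for idx, n, mean_p, freq in calibration_buckets(forecasts, outcomes, n_bins=5):
        print(f"bin {idx}: n={n}, mean forecast={mean_p:.2f}, observed freq={freq:.2f}")
```

The point of a proper scoring rule is incentive compatibility: the model can't do better than reporting its true credence, so training against it directly targets the calibration we're discussing.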
(What I’m writing here has obvious implications for building capabilities, but I would expect people at the labs to be smart enough to have these thoughts on their own—if there’s anyone who thinks I shouldn’t write like this please tell me)