They can if researchers (intentionally or accidentally) turn the SL/UL system into a goal-based agent. For example, imagine a SayCan-like system that uses a language model to generate plans and a robotic system to execute them. I'm personally not sure how likely this is to happen by accident, but I think it is very likely to happen intentionally anyway.
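To make the concern concrete, here is a minimal sketch of how a wrapper loop can turn a non-agentic language model into a goal-based agent. Everything here (`plan_with_lm`, `Robot`, the step format) is a hypothetical stand-in, not the actual SayCan API:

```python
def plan_with_lm(goal: str) -> list[str]:
    """Stand-in for a language model that decomposes a goal into steps.

    A real SayCan-like system would prompt an LM and score candidate
    steps against the robot's available skills; this is a placeholder.
    """
    return [f"locate {goal}", f"grasp {goal}", f"deliver {goal}"]


class Robot:
    """Stand-in for the low-level skill executor."""

    def execute(self, step: str) -> bool:
        print(f"executing: {step}")
        return True


def run_agent(goal: str) -> None:
    # This outer loop is the agentization step: the LM on its own only
    # predicts text, but wrapped like this, the combined system
    # persistently pursues a goal in the world.
    robot = Robot()
    for step in plan_with_lm(goal):
        if not robot.execute(step):
            break


run_agent("a cup of coffee")
```

The point is that none of the added scaffolding is exotic: a few lines of glue code around a plain predictive model yield a system that plans toward and acts on a goal.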