Text generated by an LM is not grounded in communicative intent
I think an LLM contains the typical elements of a cognitive architecture: a world model, a cost model, an executive controller, and a planner, all amalgamated together in one big mess. From this perspective, fine-tuning LLMs for dialogue imparts communicative intent to their executive component.