My understanding of LLM function calling is roughly as follows:
- You "list" all the functions the model can call in the prompt (presumably with names, descriptions, and parameter schemas)
- ???
- The model knows when to return a "function call" (the function name plus arguments, in JSON or some other format) during the conversation
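For concreteness, here's a minimal sketch of what I imagine the flow looks like. The function spec, prompt wording, and JSON reply shape are all my own guesses, not from any particular API:

```python
import json

# Hypothetical function spec -- the name and schema are made up for illustration.
functions = [
    {
        "name": "get_weather",
        "description": "Get the current weather for a city.",
        "parameters": {"city": "string"},
    }
]

def build_system_prompt(funcs):
    """Step 1: list the callable functions in the prompt."""
    listing = json.dumps(funcs, indent=2)
    return (
        "You can call the following functions. To call one, reply ONLY with "
        'JSON of the form {"function": <name>, "arguments": {...}}. '
        "Otherwise, reply in plain text.\n\n"
        f"Available functions:\n{listing}"
    )

def parse_reply(reply):
    """Step 3: detect whether the model chose to call a function."""
    try:
        data = json.loads(reply)
        if isinstance(data, dict) and "function" in data:
            return ("call", data["function"], data.get("arguments", {}))
    except json.JSONDecodeError:
        pass
    return ("text", reply, None)

# Simulated model reply (no real API call here).
reply = '{"function": "get_weather", "arguments": {"city": "Paris"}}'
kind, name, args = parse_reply(reply)
```

Step 2 (the "???") is the part I'm unclear on: whether the instruction text above is enough, or whether the model needs few-shot examples or fine-tuning to use the functions reliably.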
Does anyone have advice or examples on what prompt I should use?