My understanding of LLM function calling is roughly as follows:

  1. You “list” all the functions the model can call in the prompt
  2. ???
  3. The model knows when to return a “function call” (the function name plus arguments, as JSON or otherwise) during the conversation

Does anyone have any advice or examples on what prompt I should use?
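For context, here is a minimal sketch of how I imagine steps 1 and 3 fitting together (step 2 is the part I don't understand). The model call itself is stubbed with a fake reply, and the tool names, prompt wording, and JSON shape are all my own guesses, not any particular provider's API:

```python
import json

# Hypothetical tool list -- the name and schema here are made up
TOOLS = [
    {
        "name": "get_weather",
        "description": "Get the current weather for a city",
        "parameters": {"city": "string"},
    }
]

def build_prompt(user_message: str) -> str:
    # Step 1: "list" the functions in the prompt as JSON
    tool_block = json.dumps(TOOLS, indent=2)
    return (
        "You can call these functions by replying with JSON of the form "
        '{"function": <name>, "arguments": {...}}:\n'
        f"{tool_block}\n\nUser: {user_message}"
    )

def parse_function_call(model_output: str):
    # Step 3: detect whether the reply is a function call or plain text
    try:
        data = json.loads(model_output)
        if isinstance(data, dict) and "function" in data:
            return data["function"], data.get("arguments", {})
    except json.JSONDecodeError:
        pass
    return None  # ordinary conversational reply

# Faked model response, since step 2 is exactly what I'm asking about
fake_reply = '{"function": "get_weather", "arguments": {"city": "Paris"}}'
print(parse_function_call(fake_reply))  # → ('get_weather', {'city': 'Paris'})
print(parse_function_call("Hello!"))    # → None
```

Is this roughly the right shape, or do the hosted APIs handle the listing/parsing differently under the hood?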