When instruction fine-tuning Mistral, what parameters need to be set on the tokenizer?
What is the default EOS token for Mistral, and should padding be left or right?

I’ve exhausted all the online articles trying to find this. Please help. I’m instruction fine-tuning base Mistral for a text-to-SQL task.
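For context, here is a minimal sketch of the tokenizer settings this usually involves (assuming the Hugging Face `transformers` API; the helper name below is illustrative, not an official recipe):

```python
# Sketch of common tokenizer settings for causal-LM fine-tuning
# (assumes a Hugging Face-style tokenizer object; adjust for your stack).
def configure_tokenizer_for_training(tokenizer):
    # Mistral's tokenizer ships with EOS "</s>" but no dedicated pad
    # token, so reusing EOS as the pad token is a common workaround.
    if tokenizer.pad_token is None:
        tokenizer.pad_token = tokenizer.eos_token
    # Right padding is the usual choice for training; batched
    # generation at inference time typically wants "left" instead.
    tokenizer.padding_side = "right"
    return tokenizer
```

With a real tokenizer this would follow something like `AutoTokenizer.from_pretrained("mistralai/Mistral-7B-v0.1")`, then the call above before building the training dataset.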

  • weedyuh (OP) · 10 months ago

    Thank you! I used this for fine-tuning base Mistral, but during inference the output didn’t stop after the SQL query: the model never generated the end token after the answer, and the result ran on and on in an autocompletion style.
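    (A likely cause, sketched here as an assumption rather than a confirmed diagnosis: if the training examples never end with the EOS token, the model never learns to stop. The helper below is hypothetical.)

```python
# Hypothetical formatting helper: append EOS to every training example
# so the model learns to emit a stop signal after the SQL answer.
def format_example(prompt: str, sql: str, eos_token: str = "</s>") -> str:
    # Without the trailing eos_token the model only ever sees open-ended
    # text, so at inference it keeps generating in autocomplete style.
    return f"{prompt}\n{sql}{eos_token}"
```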

    I didn’t think to use the instruct model for this! I will give that a try and let you know in a few days.