• ttkciarB
    1 year ago

    It is only 1.3B :-) I have noticed that smaller models work a lot better with longer, more detailed prompts (at least 440 characters, better with twice that many).

  • vasileerB
    1 year ago

    Two ideas:

    - use deepseek-coder-1.3b-instruct not the base model

    - check that you use the correct prompting template for the model
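
    A minimal sketch of what "correct prompting template" means here, assuming the `### Instruction:` / `### Response:` format that the deepseek-coder instruct models use (the exact wording of the system line is from memory — verify it against your copy of the model card):

    ```python
    # Sketch: wrap a user request in the deepseek-coder instruct
    # template. The system line below is an assumption; check the
    # model card shipped with your GGUF file.

    def build_prompt(user_message: str) -> str:
        """Format a request for a deepseek-coder instruct model."""
        system = (
            "You are an AI programming assistant, utilizing the DeepSeek "
            "Coder model, developed by DeepSeek Company, and you only "
            "answer questions related to computer science."
        )
        return (
            f"{system}\n"
            f"### Instruction:\n{user_message}\n"
            f"### Response:\n"
        )

    print(build_prompt("Write a Python function that reverses a string."))
    ```

    If your frontend applies a chat template automatically, sending a prompt that is already wrapped like this will double-wrap it and usually degrades output — apply the template in exactly one place.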

    • East-Awareness-249OPB
      1 year ago

      It is the instruct model. You can see underneath the prompt box that it’s the deepseek-coder-1.3b-instruct_Q5_K_s model. I used the prompting template that ships with the model, and it slightly improved the answers.

      But if I ask it to write some code, it almost never does, and instead outputs gibberish.

      Does GPU/CPU quality affect the AI’s output? My device is a potato.