Hi,

I was using my search engine to look for available Emacs integrations for the open (and local) https://gpt4all.io/ when I realized that I could not find a single one.

Is there somebody already using GPT4All with Emacs who just hasn’t published their integration?

  • karthink
    1 year ago

    I can add this to gptel quite easily, but I can’t find the instructions on how to use it. Does it use a local http server? Where can I find these details?

    • publicvoit (OP)
      1 year ago

      Hi,

I personally would have expected that the desktop app needs to keep running in the background anyway. ;-)

      Any “gpt4all.el”-like mode would help me write my queries in Emacs and receive the output directly in Emacs (babel/org-mode preferred, I suppose). Currently I do a lot of copying and pasting for that purpose.

      • karthink
        1 year ago

        In that case you can use it right now with gptel, which supports an Org interface for chat.

        Enable the server mode in the desktop app, and in Emacs, run

        ;; Point gptel at GPT4All's local OpenAI-compatible server.
        ;; The API key is not checked by the local server, so any
        ;; placeholder string works.
        (setq-default gptel-model "gpt4all-j-v1.3-groovy"
                      gptel-host "http://localhost:4891/v1"
                      gptel-api-key "--")
        

        Then you can spawn a dedicated chat buffer with M-x gptel or chat from any buffer by selecting a region of text and running M-x gptel-send.
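        If the connection fails, it can help to first confirm the desktop app’s server is actually listening. Assuming it exposes the usual OpenAI-compatible endpoints on port 4891 (as in the config above), something like this should return a JSON list of available models:

        ```shell
        # Check the local GPT4All API server (assumes "server mode"
        # is enabled in the desktop app and it listens on port 4891).
        curl http://localhost:4891/v1/models
        ```

        If that request hangs or is refused, the server mode isn’t running and gptel won’t be able to reach it either.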

        • publicvoit (OP)
          1 year ago

          Great news - I will try it in the next few days. Thank you.

          • karthink
            1 year ago

            In the meantime I’ve added explicit support for GPT4All, so the instructions above may be out of date by the time you get to them. The README should have updated instructions (if it mentions support for local LLMs at all).