So I was looking over the recent merges to llama.cpp’s server and saw that they’d more or less brought it in line with OpenAI-style APIs – natively – obviating the need for e.g. api_like_OAI.py, or one of the bindings/wrappers like llama-cpp-python (+ooba), koboldcpp, etc. (not that those and others don’t provide great/useful platforms for a wide variety of local LLM shenanigans).

As of a couple days ago (can’t find the exact merge/build), it seems as if they’ve implemented – essentially – the old ‘simple-proxy-for-tavern’ functionality (for lack of a better way to describe it) but *natively*.

As in, you can connect SillyTavern (and numerous other clients, notably Hugging Face’s chat-ui, *with local web search*) without a layer of Python in between. Or, I guess, you’re trading the Python layer for a pile of Node (typically), but the server itself sits just above bare metal (if we consider compiled C++ to be ‘bare metal’ in 2023 ;).

Anyway, it’s *fast*, or at least not apparently any slower than it needs to be: similar prompt-processing and generation times to main and to the server’s own skeletal JS UI in the front-ends I’ve tried.

It seems like ggerganov and co. are getting serious about the server side of llama.cpp, perhaps even over/above ‘main’ or the notion of a pure lib/api. You love to see it. apache/httpd vibes 😈

Couple links:

https://github.com/ggerganov/llama.cpp/pull/4198

https://github.com/ggerganov/llama.cpp/issues/4216

But seriously, just try it! /models, /v1/chat/completions, and /completion are all there now as native endpoints (compiled in C++ with all the GPU features + other goodies). Boo-ya!
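
For the curious, here’s a quick sketch of what hitting those endpoints looks like from Python. Model path, port, and prompts are placeholders; the payload fields are per the server docs on recent builds:

```python
# Sketch: poking the llama.cpp server's native endpoints.
# Assumes the server was started with something like:
#   ./server -m models/your-model.gguf -c 2048 --port 8080
import requests

BASE = "http://localhost:8080"

# Native llama.cpp completion endpoint
r = requests.post(f"{BASE}/completion",
                  json={"prompt": "Building a website can be done in", "n_predict": 64})
print(r.json()["content"])

# OpenAI-style chat completions endpoint
r = requests.post(f"{BASE}/v1/chat/completions",
                  json={"model": "local",  # largely cosmetic; the loaded model is used
                        "messages": [{"role": "user", "content": "Hello!"}]})
print(r.json()["choices"][0]["message"]["content"])
```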

  • aseichter2007 · 10 months ago

    I’m pretty sure that makes it compatible with Clipboard Conqueror too!

  • SatoshiNotMe · 10 months ago

    You mean we don’t need to use llama-cpp-python anymore to serve this at an OAI-like endpoint?

    • reallmconnoisseur · 10 months ago

      Correct. You run the llama.cpp server and, in your code/GUI/whatever, set the OpenAI API base URL to the server’s endpoint.
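
      For a concrete sketch with the official openai Python client (the base URL and dummy key are the only assumptions here):

      ```python
      # Point the OpenAI Python client at a local llama.cpp server.
      # No real API key is needed; the local server ignores it.
      from openai import OpenAI

      client = OpenAI(base_url="http://localhost:8080/v1",
                      api_key="sk-no-key-required")

      resp = client.chat.completions.create(
          model="local-model",  # name is cosmetic; the server uses its loaded model
          messages=[{"role": "user", "content": "Why is the sky blue?"}],
      )
      print(resp.choices[0].message.content)
      ```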

  • sleeper-2 · 10 months ago

    huge fan of server.cpp too! I actually embed a universal binary (created with lipo) in my macOS app (FreeChat) and use it as an LLM backend running on localhost. Seeing how quickly it improves makes me very happy about this architecture choice.

    I just saw the improvements issue today. Pretty excited about the possibility of getting chat template functionality since currently all of that complexity has to live in my client.
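
    For reference, “that complexity” looks roughly like this on my end: a hand-rolled formatter per model family. A minimal sketch for a Llama-2-chat-style template (token conventions vary by model; this is just one family’s):

    ```python
    # Sketch of client-side chat templating (Llama-2-chat conventions),
    # the kind of per-model formatting a server-side template feature would replace.
    def format_llama2_chat(system: str, turns: list[tuple[str, str]]) -> str:
        """turns: (user, assistant) pairs; leave the last assistant reply
        empty to prompt the model for a new response."""
        prompt = ""
        for i, (user, assistant) in enumerate(turns):
            if i == 0:
                # System prompt is folded into the first user turn.
                user = f"<<SYS>>\n{system}\n<</SYS>>\n\n{user}"
            prompt += f"<s>[INST] {user} [/INST] {assistant}"
            if assistant:
                prompt += " </s>"
        return prompt

    print(format_llama2_chat("You are helpful.", [("Hi there!", "")]))
    ```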

    Also, TIL about the batching stuff. I’m going to try getting multiple responses using that.
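
    Something like this, maybe, assuming the server is launched with parallel slots enabled (the -np / --parallel and -cb / --cont-batching flags on recent builds):

    ```python
    # Sketch: exercising the server's parallel decoding by firing
    # concurrent requests at /completion. Flags assumed at launch:
    #   ./server -m model.gguf -np 4 -cb
    from concurrent.futures import ThreadPoolExecutor
    import requests

    def ask(prompt: str) -> str:
        r = requests.post("http://localhost:8080/completion",
                          json={"prompt": prompt, "n_predict": 32})
        return r.json()["content"]

    prompts = [f"Variation {i}: a haiku about servers." for i in range(4)]
    with ThreadPoolExecutor(max_workers=4) as pool:
        for reply in pool.map(ask, prompts):
            print(reply)
    ```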