I know the typical answer is “no, because all the libs are in Python”… but I’m kind of baffled why more porting isn’t going on, especially to Go, given that Go, like Python, is stupid easy to learn and yet much faster to run. Truly not trying to start a flame war or anything. I’m just a bigger fan of Go than Python, and I was thinking that coming into 2024, especially with all the huge money in AI now, we’d see a LOT more movement toward Go’s much faster runtime while keeping code that’s largely as easy, if not easier, to write and maintain. Not sure about Rust… it may run a little faster than Go, but the language is much more difficult to learn and use. It has been growing in popularity, though, so I was curious whether it’s a potential option.

There are some Go libs I’ve found, but the few I have come across seem to be three, four or more years old. I was hoping things like PyTorch and the like would have been ported to Go.

I was even curious, with the power of GPT-4 or DeepSeek Coder or similar, how hard it would be to convert Python libraries to Go. Is anyone working on that, or is it pretty much impossible to do?

  • Dry-Vermicelli-682 (OP) · 10 months ago

    Burn looks interesting. I think I’m just lacking an understanding of how this all works. I assumed there is code in AI that handles the models… some way to use the models as “data,” while the NLP, the AI “logic” brain, etc. would be done in code, and that this is largely the Python code. I assumed models were more or less data that a runtime AI engine uses to find answers to the questions asked, so I figured the model runners handled the NLP work and turned incoming queries into some model-specific format that lets the model’s algorithms do what they do, e.g. return responses and answers as if a human replied. I assumed ALL of that was tons and tons of code written in Python, and thus was thinking: if that’s the runtime “brain” AI uses, wouldn’t it run even faster in Go or Rust or something?
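
    For what it’s worth, here’s a minimal sketch in plain Go of what the “runtime brain” boils down to for a single neural-net layer. The weights here are toy numbers I made up for illustration (a real model would load billions of them from a file); the point is that the model supplies the data and the runner supplies a small amount of math code, not millions of lines of hand-written language logic:

    ```go
    package main

    import (
    	"fmt"
    	"math"
    )

    // A "model" here is just data: a weight matrix and a bias vector.
    // Real LLMs are the same idea scaled up to billions of numbers.
    type Layer struct {
    	Weights [][]float64 // shape: out x in
    	Bias    []float64   // shape: out
    }

    // Forward runs one layer: y = softmax(W*x + b).
    // This small loop of arithmetic is the kind of "logic" the runtime
    // executes; the language "knowledge" lives entirely in the numbers.
    func (l Layer) Forward(x []float64) []float64 {
    	y := make([]float64, len(l.Bias))
    	for i, row := range l.Weights {
    		sum := l.Bias[i]
    		for j, w := range row {
    			sum += w * x[j]
    		}
    		y[i] = sum
    	}
    	return softmax(y)
    }

    // softmax turns raw scores into probabilities (subtracting the max
    // first for numerical stability).
    func softmax(v []float64) []float64 {
    	max := v[0]
    	for _, x := range v {
    		if x > max {
    			max = x
    		}
    	}
    	var total float64
    	out := make([]float64, len(v))
    	for i, x := range v {
    		out[i] = math.Exp(x - max)
    		total += out[i]
    	}
    	for i := range out {
    		out[i] /= total
    	}
    	return out
    }

    func main() {
    	// Toy weights standing in for a trained model (normally loaded
    	// from a model file, not written in source code).
    	layer := Layer{
    		Weights: [][]float64{{0.5, -0.2}, {0.1, 0.8}},
    		Bias:    []float64{0.0, 0.1},
    	}
    	fmt.Println(layer.Forward([]float64{1.0, 2.0}))
    }
    ```

    A full LLM runner is essentially this, repeated across dozens of layers and optimized heavily (SIMD, GPU kernels), which is also why the surrounding language matters less than people expect: the hot path in PyTorch is already C++/CUDA, with Python only orchestrating it.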

    I’m sadly not sure if I’m explaining this right. I just assumed there were millions of lines of code behind the AI “brain” and that the model was basically gobs of data in some sort of, for lack of a better word, compressed database format. So when “training” occurs… I’m not entirely clear what’s going on, other than that it takes a ton of compute and results in a single .gguf or similar file, which is the model that can then be loaded by the likes of ollama and queried by users in plain English. The code behind the training, and the code behind running a model, is what I’m foggy on. Is there code IN the model, in binary format or something, along with ALL the data it draws from?
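
    To make the “is there code in the model file?” question concrete, here’s a hedged sketch of loading weights from a made-up toy format (a uint32 count followed by raw little-endian float32s — NOT the actual GGUF layout, which also carries tensor names, shapes, quantization types, and tokenizer metadata). The file name `model.bin` is hypothetical. The idea is the same, though: the file holds numbers, not executable code, and the runner brings its own math:

    ```go
    package main

    import (
    	"encoding/binary"
    	"fmt"
    	"os"
    )

    // loadWeights reads a toy model file: a uint32 count followed by
    // that many little-endian float32 values. Real formats like GGUF
    // are far richer, but the principle holds: weights in, no code in.
    func loadWeights(path string) ([]float32, error) {
    	f, err := os.Open(path)
    	if err != nil {
    		return nil, err
    	}
    	defer f.Close()

    	var count uint32
    	if err := binary.Read(f, binary.LittleEndian, &count); err != nil {
    		return nil, err
    	}
    	weights := make([]float32, count)
    	if err := binary.Read(f, binary.LittleEndian, weights); err != nil {
    		return nil, err
    	}
    	return weights, nil
    }

    func main() {
    	w, err := loadWeights("model.bin") // hypothetical file name
    	if err != nil {
    		fmt.Println("load failed:", err)
    		return
    	}
    	n := 4
    	if len(w) < n {
    		n = len(w)
    	}
    	fmt.Printf("loaded %d weights; first few: %v\n", len(w), w[:n])
    }
    ```

    So training is the expensive compute that produces those numbers, and inference (ollama, llama.cpp, etc.) is a comparatively small program that loads them and runs the forward-pass math over them.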

    I originally thought AI would use the internet in real time… but that would clearly take a LOT longer, since the AI would have to search the web for answers and then formulate some sort of intelligent response rather than just pasting together snippets it finds.