1 post · 6 comments · joined 1 year ago (cake day: October 31st, 2023)

  • Naiw80 (OP) to LocalLLaMA@poweruser.forum · “Are 7b models useful?” · 1 year ago

    Update on this topic…

    I realised I’ve made some mistakes. The reason I asked about 7B models to begin with is that the computer I’m using is resource-constrained (and normally I use a frontend for the actual interaction).

    But because I only have 8GB of RAM in the computer, I decided to go with llama.cpp, and this is obviously where things went wrong.

    First of all, I obviously messed up the prompt. Not that I notice any significant difference now that I’ve fixed it, but it did not follow the expected format for the model I was using.

    But the key issue appears to be that I had been using the -i (interactive) argument, thinking it would work like a chat session. It does for a few queries, but as stated in the original post, the model then suddenly starts conversing with itself (filling in my side of the conversation, etc.).
    It turns out I should have been using --instruct all along; once I realised that, things started to work a lot better (although still not perfectly).
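    For anyone hitting the same thing, here is a minimal sketch of the two invocations (flags as in the older llama.cpp `main` binary; the model filename below is just a placeholder for whatever GGUF file you downloaded):

    ```shell
    # Plain interactive mode (-i): generation is open-ended, so after a few
    # turns the model can keep going and start writing "your" replies too.
    ./main -m ./models/your-7b-model.Q4_K_M.gguf -i

    # Instruct mode: each input is wrapped in the instruction template,
    # so the model answers and then stops to wait for the next prompt.
    ./main -m ./models/your-7b-model.Q4_K_M.gguf --instruct
    ```

    The practical difference is that --instruct gives the model an explicit turn boundary, which is what stops it from role-playing both sides.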

    Finally, I decided to give neural-chat a try, and it handles most things I ask of it with great success.

    Thanks all for your feedback and comments.







  • I’m so baffled this has not been realised by people before; it’s so obvious, and it’s not the first time in history this has happened either.

    First of all, Max Tegmark: isn’t it even slightly suspicious that his “non-profit” organisation received millions in donations from Elon Musk? I haven’t figured out what Elon’s stake in this is yet, but I have absolutely no doubt in my mind that it’s financial; basically everything he has ever done and said has been aimed at manipulating the stock market and the like, and I doubt that changed recently.

    Then you have OpenAI, which first and foremost is anything but open, and very much “ProprietaryAI” nowadays. What seriously annoys me is that OpenAI in particular has been “teasing” about “AGI in n days” on several occasions, for what purpose if not to manipulate expectations and investors? Yet they are one of the driving forces in this matter. Are people really unable to put two and two together?