OpenAI's nightmare: Deepseek R1 on a Raspberry Pi (www.youtube.com) · rglullis@communick.news · English · 20 days ago · 0 comments
Build a Fully Local RAG App With PostgreSQL, Mistral, and Ollama (www.timescale.com) · rglullis@communick.news · English · 5 months ago · 0 comments
Jailbreak prompts for Llama? · zerokerim · English · 1 year ago · 2 comments
Best open/commercial model that is tuned on ChatGPT4? · learning_hedonism · English · 1 year ago · 1 comment
Just curious, are there any GUIs for creating a LLaMA2 architecture similar to how OpenAI does "custom GPTs"? · LivingDracula · English · 1 year ago · 2 comments
Which is the best model (finetuned or base) to extract structured data from a bunch of text? · sandys1 · English · 1 year ago · 6 comments
Is RAG better with fine-tuning on the same data, or pure RAG FTW? · Shoddy_Vegetable_115 · English · 1 year ago · 1 comment
How to start red teaming on LLMs? · kadhi_chawal2 · English · 1 year ago · 1 comment
Cheapest GPU/way to run 30B or 34B "code" models with GPT4ALL? · ForsookComparison · English · 1 year ago · 1 comment
A100 inference is much slower than expected with small batch size · currytrash97 · English · 1 year ago · 2 comments
A new dataset for LLM training has been released! · Grouchy-Mail-2091 · English · 1 year ago · 2 comments
How to install llama.cpp version for Qwen72B? · Secret_Joke_2262 · English · 1 year ago · 1 comment
Nous-Hermes-2-Vision · Nix_The_Furry · English · 1 year ago · 1 comment
QuIP#: SOTA 2-bit quantization method, now implemented in text-generation-webui (experimental) (github.com) · oobabooga4 · English · 1 year ago · 6 comments
Is the M1 Max MacBook Pro worth it? · PuzzledWhereas991 · English · 1 year ago · 3 comments
Anyone running 3 GPUs? Looking for advice on the best X670 board that might be able to slot a third card. · fluffywuffie90210 · English · 1 year ago · 3 comments
This model is extremely good · noobgolang · English · 1 year ago · 15 comments
Politically balanced chat model? · Clark9292 · English · 1 year ago · 7 comments
Optimum Intel OpenVINO Performance · fakezeta · English · 1 year ago · 4 comments
I refuse to believe my MacBook M1 Pro is faster than my 2070 Super 8GB + 8th-gen i7 (both have 16GB RAM) · roll_left_420 · English · 1 year ago · 2 comments