**seraschka (OP)** · 2 years ago:

> =256, alpha=64

I think I need glasses 😅. You are right that other ratios worked quite well too. I amended that section a bit.
**seraschka** to Machine Learning@academy.garden · English · 2 years ago

**[P] Practical Tips for Finetuning LLMs Using LoRA (Low-Rank Adaptation): Things I Learned From Hundreds of Experiments** (magazine.sebastianraschka.com)
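For context, the alpha-to-rank ratio being discussed maps directly onto LoRA's scaling factor: the low-rank update is scaled by `alpha / r` before being added to the frozen layer's output, so the same ratio gives the same effective update strength. A minimal NumPy sketch of that forward pass (illustrative names and shapes, not the article's actual code):

```python
import numpy as np

class LoRALinear:
    """Minimal LoRA sketch: y = x W^T + (alpha / r) * x A^T B^T.
    Illustrative only; real implementations (e.g. Hugging Face PEFT)
    wrap an existing frozen linear layer."""

    def __init__(self, in_features, out_features, r=256, alpha=64, seed=0):
        rng = np.random.default_rng(seed)
        # Frozen pretrained weight (not updated during finetuning)
        self.W = rng.normal(size=(out_features, in_features))
        # Trainable low-rank factors: A down-projects, B up-projects
        self.A = rng.normal(size=(r, in_features)) * 0.01
        self.B = np.zeros((out_features, r))  # zero-init => no change at start
        # The ratio under discussion: e.g. alpha=64, r=256 gives 0.25
        self.scaling = alpha / r

    def forward(self, x):
        return x @ self.W.T + self.scaling * (x @ self.A.T @ self.B.T)
```

Because `B` is zero-initialized, the layer initially reproduces the frozen weights exactly; `alpha / r` then controls how strongly the learned update contributes, which is why different (r, alpha) pairs with a similar ratio can behave similarly.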