seraschka (OP) to Machine Learning@academy.garden · English · 1 year ago, replying under "[P] Practical Tips for Finetuning LLMs Using LoRA (Low-Rank Adaptation): Things I Learned From Hundreds of Experiments":

> =256, alpha=64

I think I need glasses 😅. You are right that other ratios worked quite well too. I amended this section a bit.
seraschka to Machine Learning@academy.garden · English · 1 year ago
[P] Practical Tips for Finetuning LLMs Using LoRA (Low-Rank Adaptation): Things I Learned From Hundreds of Experiments
magazine.sebastianraschka.com
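For context on the ratio being discussed: LoRA scales its low-rank update by alpha / r before adding it to the frozen weight, so the r and alpha values quoted above (alpha=64 with a rank of 256) imply a much smaller effective scaling than the commonly cited alpha = 2r heuristic. A minimal plain-Python sketch of that scaling rule (the tiny matrix sizes and values are illustrative assumptions, not from the post):

```python
def matmul(X, Y):
    """Plain-Python matrix multiply for the small LoRA sketch below."""
    return [[sum(x * y for x, y in zip(row, col)) for col in zip(*Y)]
            for row in X]

def lora_update(W, A, B, r, alpha):
    """Apply the LoRA update W' = W + (alpha / r) * B @ A.

    W: frozen base weight (d_out x d_in)
    A: down-projection (r x d_in), B: up-projection (d_out x r)
    """
    scaling = alpha / r  # the alpha-to-rank ratio discussed in the thread
    BA = matmul(B, A)
    return [[w + scaling * d for w, d in zip(w_row, d_row)]
            for w_row, d_row in zip(W, BA)]

# Effective scaling for the configuration quoted in the comment (alpha=64,
# rank 256) versus the alpha = 2r heuristic:
print(64 / 256)    # 0.25: the adapter update is weighted conservatively
print(512 / 256)   # 2.0: what alpha = 2r would give at the same rank

# Toy demo with rank r=1 on a 2x2 weight, just to exercise the formula.
W = [[1.0, 0.0], [0.0, 1.0]]
A = [[1.0, 1.0]]             # 1 x 2 down-projection
B = [[2.0], [0.0]]           # 2 x 1 up-projection
print(lora_update(W, A, B, r=1, alpha=1))  # [[3.0, 2.0], [0.0, 1.0]]
```

The point the ratio makes: at fixed r, raising alpha uniformly amplifies how strongly the learned low-rank correction perturbs the base weights.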