Link: https://huggingface.co/ajibawa-2023/SlimOrca-13B

This model is trained on a refined version of SlimOrca made available by the Open-Orca team. The idea was to check how this model would perform in the absence of the "system" prompt/value. It performs remarkably well. This model is very good at various types of general-purpose content generation, such as Q&A (including multiple choice), articles from summaries, sentiment analysis, context & hypothesis, reviews, erotic story generation, etc. It can also generate uncensored content. Kindly be careful while generating uncensored content, as you will be responsible for what you generate.
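For a quick start, here is a minimal inference sketch using the Hugging Face transformers library. The plain USER/ASSISTANT prompt template is an assumption on my part (chosen because the model was trained without a "system" value), not a confirmed training template, so adjust it to your use case.

```python
# Minimal inference sketch with Hugging Face transformers.
# The USER/ASSISTANT prompt format below is an assumption, not the
# confirmed training template; adapt it as needed.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "ajibawa-2023/SlimOrca-13B"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float16,  # half precision fits a 13B model on a 40GB+ GPU
    device_map="auto",
)

# No system prompt: the model was trained without a "system" value.
prompt = "USER: Summarize the plot of Hamlet in two sentences.\nASSISTANT:"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(
    **inputs,
    max_new_tokens=256,
    do_sample=True,
    temperature=0.7,
)
# Decode only the newly generated tokens, skipping the prompt.
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```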

It is trained on 517,981 sets of conversations, each set having 2 conversations. I have shared this data.
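To illustrate what "each set having 2 conversations" means, here is a sketch of one training record: a single human turn paired with a single assistant turn, with no "system" entry. The field names follow the ShareGPT-style convention used by the SlimOrca dataset; treat the exact keys as an assumption.

```python
# Illustrative shape of one training record: two turns, no "system" entry.
# Keys follow the ShareGPT-style convention used by SlimOrca (assumed here).
example_record = {
    "conversations": [
        {"from": "human", "value": "What is the capital of France?"},
        {"from": "gpt", "value": "The capital of France is Paris."},
    ]
}
```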

The entire dataset was trained on Azure using 4 x A100 80GB GPUs. Training for 3 epochs took almost 11 days. The DeepSpeed codebase was used for training. The base model is Llama-2 by Meta, and this is a fully fine-tuned model.
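For readers curious about the multi-GPU setup, below is a hypothetical DeepSpeed configuration for a full fine-tune of a 13B model on 4 x A100 80GB. Only the fact that DeepSpeed was used comes from this card; the ZeRO stage, precision, and batch sizes are illustrative assumptions, not the actual training settings.

```python
# Hypothetical DeepSpeed config for a 13B full fine-tune on 4 x A100 80GB.
# ZeRO stage, precision, and batch sizes are illustrative assumptions only.
import json

ds_config = {
    "bf16": {"enabled": True},
    "zero_optimization": {
        "stage": 3,               # shard params, grads, and optimizer state across GPUs
        "overlap_comm": True,
        "contiguous_gradients": True,
    },
    "gradient_accumulation_steps": 8,
    "train_micro_batch_size_per_gpu": 2,
    "gradient_clipping": 1.0,
}

with open("ds_config.json", "w") as f:
    json.dump(ds_config, f, indent=2)

# Illustrative launch command for a 4-GPU node:
#   deepspeed --num_gpus=4 train.py --deepspeed ds_config.json
```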

All the credit goes to the Open-Orca team for releasing the SlimOrca dataset. I am extremely thankful to the open-source community for sharing knowledge and wisdom.

If there are any mistakes, they are solely mine. I hope you will like it.

Thank you