That makes sense cheers
Good to know thank you
Good info thanks for that!
Multiple passes at lower learning rates aren’t supposed to produce different results.
(Assuming your mini-batching etc. is all set up correctly.) Nonetheless, I love exploration and can’t wait to learn more. Thanks for sharing, dude!
Orca 2 7B (released just the other day) DEFINITELY competes with OpenHermes 2.5, but it’s hard to pick a clear winner (though I would lean toward Orca 2 myself)
Synthia Mistral 7B was pretty glorious, but OpenHermes 2.5 is just better.
what does the white and blue text mean in the video?
WOW
Amazing info, thank you kindly my dude!
Gonna be reading this for a while…
Oh Awesome!
Thank you!
Man! $300 million in a few weeks for making free, open text-predictors which other people bang into variously shaped tools!
Welcome to the future baby!
Pretraining = Unsupervised Learning
Fine Tuning = Supervised Learning
Human Feedback = Reinforcement Learning
These three steps produce what we are currently calling AI.
A modern LLM like Mistral 7B is made of 32 transformer layers, each built around 4096×4096 weight matrices.
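For the curious, here’s what those dimensions look like written out (sizes follow Mistral 7B’s published config; the parameter count below is my own back-of-envelope estimate, not an official figure):

```python
# Rough shape of a Mistral-7B-style transformer stack (a sketch of the
# dimensions, not a working implementation).
from dataclasses import dataclass

@dataclass
class MistralLikeConfig:
    num_layers: int = 32            # 32 stacked transformer blocks
    hidden_size: int = 4096         # width of the residual stream (the "4096")
    num_heads: int = 32             # query heads per block
    num_kv_heads: int = 8           # grouped-query attention: fewer K/V heads
    intermediate_size: int = 14336  # MLP expansion width

cfg = MistralLikeConfig()
head_dim = cfg.hidden_size // cfg.num_heads
# Per-block weights: Q and O projections are hidden x hidden, K and V are
# smaller thanks to grouped-query attention; the MLP has 3 big matrices.
attn = 2 * cfg.hidden_size**2 + 2 * cfg.hidden_size * cfg.num_kv_heads * head_dim
mlp = 3 * cfg.hidden_size * cfg.intermediate_size
print(f"~{cfg.num_layers * (attn + mlp) / 1e9:.1f}B params in the blocks alone")
```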
In pretraining,
Coherent text is fed through the network one word at a time (in this case, effectively the entire internet’s text), and the model’s connection weights are automatically adjusted toward values such that, given a list of words, it correctly predicts the next one.
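In code, that objective is just next-token cross-entropy. A minimal sketch with a toy stand-in model (not Mistral itself; the sizes are made up for illustration):

```python
# Minimal sketch of the pretraining objective: shift the tokens by one
# and train the network to predict each NEXT token with cross-entropy.
import torch
import torch.nn as nn

vocab_size, hidden = 100, 64       # toy sizes for illustration
model = nn.Sequential(             # stand-in for the real transformer stack
    nn.Embedding(vocab_size, hidden),
    nn.Linear(hidden, vocab_size),
)
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-3)

tokens = torch.randint(0, vocab_size, (1, 16))   # pretend "internet text"
inputs, targets = tokens[:, :-1], tokens[:, 1:]  # predict the next token

logits = model(inputs)                           # (1, 15, vocab_size)
loss = nn.functional.cross_entropy(
    logits.reshape(-1, vocab_size), targets.reshape(-1))
optimizer.zero_grad()
loss.backward()                                  # nudge weights toward better predictions
optimizer.step()
print(f"loss: {loss.item():.2f}")
```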
In finetuning,
This time, data pairs are fed through (an example prompt AND an example correct answer). This bangs the model over the head and forces it to respond to our prompt formatting; it’s also where we make it helpful and teach it to do what it’s told.
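A sketch of what one such pair looks like (the chat template below is made up for illustration; every model family defines its own special tokens):

```python
# One hypothetical fine-tuning example: prompt and correct answer are
# glued into a single training sequence, so the model learns to answer
# in exactly this format.
pair = {
    "prompt": "Summarize: The quick brown fox jumps over the lazy dog.",
    "response": "A fox jumps over a dog.",
}
# Illustrative chat template -- real models each define their own markers.
text = f"<|user|>\n{pair['prompt']}\n<|assistant|>\n{pair['response']}"
print(text)
```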
In Human Feedback,
(Abbreviated to RLHF.) We let the model mutate slightly, having it generate multiple responses with slightly differing internal weights and having actual humans select their favorites. Over time this draws the model toward not just generalizing from text examples, but toward actually pleasing humans with words (whatever that process might entail).
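A toy sketch of the feedback-collection loop (all the names here are stand-ins; real pipelines then train a reward model on these comparisons before the reinforcement step):

```python
# Collecting human preferences: sample several candidate answers, let a
# human pick a favorite, and keep the (chosen, rejected) pairs.
import random

def generate_candidates(prompt: str, n: int = 4) -> list[str]:
    # Stand-in for sampling the model n times with some variation.
    return [f"candidate answer {i} to {prompt!r}" for i in range(n)]

preferences = []
for prompt in ["Explain RLHF in one sentence."]:
    candidates = generate_candidates(prompt)
    chosen = random.choice(candidates)  # a real human would choose here
    preferences.append({
        "prompt": prompt,
        "chosen": chosen,
        "rejected": [c for c in candidates if c != chosen],
    })
print(preferences[0]["chosen"])
```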
All intelligence emerges during the pure prediction/pretraining stage. Finetuning and RLHF actually damage the model SIGNIFICANTLY! But working with pure text-prediction engines requires much more thought than simple prompt engineering.
There’s a strong mathematical argument that Modeling == Prediction == Compression == Intelligence, meaning it’s essentially impossible to get any one of these without also getting the other three:
Accurate Modeling provides Prediction (by simply running the model forward in time), and accurate Prediction provides Compression (by only storing the difference from the prediction).
And Intelligence (i.e., getting what you want) is simply a matter of using your Compressed Model of the world to Predict what might happen if you performed various actions, then selecting the one where you get what you want.
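You can see the Prediction→Compression link in a toy example: an ideal coder spends about -log2(p) bits per symbol, so a better predictor means fewer bits (the 26-letter uniform baseline and the frequency model here are my own illustrative choices):

```python
# Toy "prediction provides compression" demo: ideal coding cost is
# -log2(p) bits per symbol, so better predictions => smaller files.
import math
from collections import Counter

text = "abababababababab"

# Clueless predictor: 26 equally likely letters, ~4.7 bits per char.
uniform_bits = len(text) * math.log2(26)

# Frequency-based predictor: learns 'a' and 'b' are each p = 0.5.
counts = Counter(text)
freq_bits = sum(-math.log2(counts[c] / len(text)) for c in text)

print(f"uniform: {uniform_bits:.1f} bits, frequency model: {freq_bits:.1f} bits")
```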
Modern Open Source LLMs (like OpenChat, DeepSeek Coder, etc.) are actually superior to GPT4 now…
The reason they might seem behind to casual users is the RLHF step, which GPT4 has received A TON OF!
This is an expensive step requiring many people’s time, and open models simply skip it.
The thing is, using techniques (like simply asking a million times), we find the knowledge and skill in these OS models are CLEARLY far beyond the latest available OpenAI GPT model (it’s 11/9/2023 now)
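“Asking a million times” is just best-of-n sampling in practice; here’s a toy sketch (ask_model and score are hypothetical stand-ins for sampling the model and for whatever automatic checker you have):

```python
# Best-of-n sketch: draw many answers from the model, keep the one an
# automatic checker scores highest.
import random

def ask_model(prompt: str) -> str:
    # Stand-in for sampling an open model with temperature > 0.
    return f"candidate #{random.randint(0, 9)} for {prompt!r}"

def score(answer: str) -> float:
    # Stand-in for any verifier: unit tests, a judge model, etc.
    return random.random()

def best_of_n(prompt: str, n: int = 100) -> str:
    return max((ask_model(prompt) for _ in range(n)), key=score)

print(best_of_n("Write a sorting function."))
```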
Hope that helps! Modern LLM-based AI is actually extremely simple: we create an intelligent beast using prediction, then we bang it over the head (giving it serious, literal brain damage) to behave for us, then we listen closely to it and slap it in the face for the tiniest mistake until we’re happy with it.
It’s still just an insanely high-dimensional word-by-word predictor; it’s just been traumatized by humans to please us.
This pretty much sums it up
Enjoy!
Awesome!
Becoming your own AI company has never been easier 😊
I use it a lot to think through ideas for advanced technology
I find GPT may not give me the most advanced tech advice, but it’s more than capable of keeping up if I have a conversation about it.
It’s very good at helping to get thoughts clear and concise.
It’s also great at coding; I usually just write a header and ask GPT to fill out the implementation.
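For example (a Python stub standing in for a header here; the body is the kind of thing the model fills in, not any particular model’s actual output):

```python
# The "write the header, let the model fill it in" workflow: I write
# the signature and docstring; everything in the body is the model's job.
def moving_average(values: list[float], window: int) -> list[float]:
    """Return the moving average of `values` over a sliding `window`."""
    if window <= 0:
        raise ValueError("window must be positive")
    return [
        sum(values[i : i + window]) / window
        for i in range(len(values) - window + 1)
    ]

print(moving_average([1.0, 2.0, 3.0, 4.0], 2))  # [1.5, 2.5, 3.5]
```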
With the speed of AI improvement, it’s hard to imagine what it will be like this time next year 😜
Enjoy
Multimedia Fusion
A Virgin.