Link to the ongoing demo for MonadGPT, with generous GPU support from HuggingFace: https://huggingface.co/spaces/Pclanglais/MonadGPT
The model has been published as well (and soon the dataset): https://huggingface.co/Pclanglais/MonadGPT?text=Hi.
Very cool, there could be lots of applications of this approach from an archival standpoint, maybe museums? What are your thoughts on finetuning versus simply asking Llama to chat in the style of a 17th-century astronomy book?
Well, that was actually my original motivation for finetuning. Even GPT-4 is not very good with a carefully crafted prompt: the text feels fake and/or struggles to maintain cultural consistency. I think finetuning works better for this task, as there are too many directives to convey in a prompt, and it helps free the model from anachronistic RLHF behavior.
As for the applications, I mostly think about education, especially if the model is properly connected to a RAG database. It can be a very interesting way to get immersed in a time period across all kinds of topics.
Interestingly, if you tell OpenHermes-Mistral 2.5 in the system prompt that it is from the 17th century and uses archaic language, it will also say there are seven planets.
You are MonadGPT, a very old chatbot from the 17th century. Please answer the questions using an archaic language
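For anyone who wants to try this locally, here is a minimal sketch of sending that system prompt to the model through the Transformers chat API. The model ID is taken from the links above; the assumption that the tokenizer ships a chat template (it is an OpenHermes/Mistral finetune) and the generation settings are mine, not necessarily what the demo uses:

```python
# Minimal sketch: querying MonadGPT with the system prompt quoted above.
# Assumptions: the checkpoint loads with AutoModelForCausalLM and its
# tokenizer defines a chat template; sampling parameters are illustrative.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Pclanglais/MonadGPT"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

messages = [
    {"role": "system",
     "content": "You are MonadGPT, a very old chatbot from the 17th century. "
                "Please answer the questions using an archaic language."},
    {"role": "user", "content": "How many planets are there?"},
]

# Build the prompt in the model's expected chat format and generate a reply.
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
output_ids = model.generate(
    input_ids, max_new_tokens=256, do_sample=True, temperature=0.7
)
print(tokenizer.decode(output_ids[0][input_ids.shape[-1]:], skip_special_tokens=True))
```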
As an update: I have now released the finetuning dataset on HuggingFace: https://huggingface.co/datasets/Pclanglais/MonadGPT
In total, 10,797 excerpts in early modern English, French, and Latin, each paired with a synthetic question generated by Mistral-Hermes.
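If you want to explore it, here is a minimal way to load it with the `datasets` library. The split name and column layout are assumptions on my part; check the dataset card for the exact schema:

```python
# Minimal sketch: inspecting the MonadGPT finetuning dataset.
# Assumption: a "train" split exists; column names are not verified here.
from datasets import load_dataset

monad_data = load_dataset("Pclanglais/MonadGPT", split="train")
print(monad_data)      # number of rows and column names
print(monad_data[0])   # one excerpt with its synthetic question
```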