Hey everyone,
I run a translation Discord bot that's in around 30,000 servers, but paid services like Google Translate or DeepL are too expensive at that scale. Instead, I've switched to an open-source translation model (currently m2m100 with 400 million parameters) running on CPU. However, its translations aren't up to par, and I've found that newer models like MADLAD-400 deliver much better results. The catch is that MADLAD-400 is too large to run on a CPU.
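For context, here is roughly what the current CPU setup looks like. This is a minimal sketch, assuming the `facebook/m2m100_418M` checkpoint on Hugging Face (the 400M-parameter variant mentioned above) and the standard `transformers` API; the actual bot's code may differ:

```python
# Minimal m2m100 translation on CPU via Hugging Face transformers.
# Assumption: the "m2m100 with 400 million parameters" model refers to
# the facebook/m2m100_418M checkpoint.
from transformers import M2M100ForConditionalGeneration, M2M100Tokenizer

model = M2M100ForConditionalGeneration.from_pretrained("facebook/m2m100_418M")
tokenizer = M2M100Tokenizer.from_pretrained("facebook/m2m100_418M")

def translate(text: str, src_lang: str, tgt_lang: str) -> str:
    """Translate text between languages given ISO 639-1 codes (e.g. "en", "fr")."""
    tokenizer.src_lang = src_lang
    encoded = tokenizer(text, return_tensors="pt")
    # m2m100 selects the target language by forcing its token
    # as the first generated token.
    generated = model.generate(
        **encoded,
        forced_bos_token_id=tokenizer.get_lang_id(tgt_lang),
    )
    return tokenizer.batch_decode(generated, skip_special_tokens=True)[0]

print(translate("Hello, how are you?", "en", "fr"))
```

Moving this to a GPU is mostly a matter of `model.to("cuda")` and sending the encoded inputs to the same device; the bottleneck is fitting a larger model like MADLAD-400 into GPU memory within budget.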
I'm looking to deploy this improved model on a GPU, but my budget is tight, around $100-150 per month. Does anyone know of services or offers that provide GPU access in this price range? Any suggestions or recommendations would be greatly appreciated!
This post is an automated archive from a submission made on /r/MachineLearning, powered by Fediverser software running on alien.top. Responses to this submission will not be seen by the original author until they claim ownership of their alien.top account. Please consider reaching out to let them know about this post and help them migrate to Lemmy.
Lemmy users: you are still very much encouraged to participate in the discussion. There are still many other subscribers on !machinelearning@academy.garden that can benefit from your contribution and join in the conversation.
Reddit users: you can also join the fediverse right away by visiting https://portal.alien.top. If you are looking for a Reddit alternative made for and by an independent community, check out Fediverser.