By not using LLMs to do the modelling. Use specialized models for the data analysis, and an LLM to orchestrate them and communicate with the user. LLMs aren't cheap to run, though, so you may want to do a cost/benefit analysis first.
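A minimal sketch of that pattern, with the LLM layer stubbed out. Every name here (the tool registry, the keyword router, the toy models) is a hypothetical placeholder; in a real system the LLM would do the routing and phrase the answer.

```python
def forecast_model(series):
    # Specialized model: naive 3-point moving-average forecast.
    window = series[-3:]
    return sum(window) / len(window)

def anomaly_model(series):
    # Specialized model: flag points more than twice the series mean.
    m = sum(series) / len(series)
    return [x for x in series if x > 2 * m]

# Registry of cheap, specialized tools the orchestrator can call.
TOOLS = {"forecast": forecast_model, "anomaly": anomaly_model}

def orchestrate(query, series):
    # Stand-in for the LLM: route the request to a specialized model,
    # then phrase the result for the user. A real LLM call would replace
    # both the keyword match and the string formatting.
    for name, tool in TOOLS.items():
        if name in query.lower():
            return f"{name} result: {tool(series)}"
    return "No suitable specialized model found."

print(orchestrate("forecast next month's sales", [1, 2, 3, 4, 5, 6]))
```

The point of the split: the expensive LLM call happens once per user turn, while the heavy numerical work runs on models that cost a fraction as much.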
If money were no object I'd just build myself a data centre with entire clusters full of specialized tensor processors. I'd even have a gaming/editing cluster connected to my house via optical fibre and high-speed wireless, so I could do everything I want on thin clients with no latency. Hell, if money were no object I'd probably hire the best scientists and engineers in the world and surpass Google and OpenAI.
Reinforcement learning.