I had a discussion in class with one of my teachers. He says that AI is and can only be always deterministic because “even a deep learning neural network is a set of equations running on a computer, and the stochastic factor is added at the beginning. But the output of a model is always deterministic, even if it’s not interpretable by humans.”
How would you reply? (Possibly with examples and papers)
Tysm!
He’s not wrong, but there are lots of things that can throw a wrench into the predictability. For example, if you’re using a Hugging Face model and the weights file changes out from under your nose.
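If you want to guard against that one, you can pin the exact commit of the weights repo instead of tracking the default branch. A minimal sketch using the `transformers` library (the model name and commit SHA are placeholders, not a real repo):

```python
from transformers import AutoModel

# Pin the exact commit of the weights repo instead of tracking "main",
# so a silent re-upload of the weights file can't change your outputs.
# Model name and commit SHA below are placeholders.
model = AutoModel.from_pretrained(
    "some-org/some-model",
    revision="abc123def456",  # full commit SHA from the repo's history
)
```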
Or if the hardware you’re executing on has a bug (like the Pentium FDIV floating-point bug back in the day).
Or if the model’s precision is reduced or increased in a significant way by the hardware it’s running on.
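Precision swaps are easy to demo. A toy sketch in plain NumPy (nothing model-specific, just the same accumulation at two widths):

```python
import numpy as np

# Accumulate 0.1 ten thousand times at two precisions. In float16 the
# running sum stalls near 256, because adding 0.1 to a number that big
# rounds back to the same float16 value; float32 lands near 1000.
# Same algorithm, different precision, a very different (but fully
# deterministic) answer.
for dt in (np.float32, np.float16):
    acc = dt(0)
    for _ in range(10_000):
        acc += dt(0.1)
    print(dt.__name__, acc)
```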
Or if the random bits feeding the sampler are unobservable to you, etc.
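The unobservable-seed case is exactly the “deterministic but you can’t reproduce it” situation. Toy sketch in plain Python:

```python
import random

# With a visible, fixed seed the "random" draws replay exactly.
random.seed(42)
a = [random.random() for _ in range(3)]
random.seed(42)
b = [random.random() for _ in range(3)]
print(a == b)  # True: deterministic given the seed

# Seeded from something you can't observe (OS entropy), each run
# still computes a deterministic function of its seed -- you just
# can't know the input, so you can't predict or replay the output.
random.seed()
print(random.random())
```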
In these cases it’s still deterministic; it’s just not easy to determine, especially when small hardware changes (as opposed to algorithmic ones) can change the output.
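One concrete mechanism behind that: floating-point addition isn’t associative, so when different hardware (or a different kernel/launch configuration) changes the order of a parallel reduction, the result can shift in the low bits, even though every individual ordering is deterministic. Toy sketch:

```python
import random

# Sum the same numbers in two different orders, the way two different
# parallel reductions might. Each order is deterministic, but they
# typically disagree in the trailing digits because floating-point
# addition is not associative.
random.seed(0)
xs = [random.uniform(-1, 1) for _ in range(100_000)]

fwd = sum(xs)
rev = sum(reversed(xs))
print(fwd, rev)   # usually differ in the last few digits
print(fwd == rev)
```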