Yeah I thought that might be the case.
The project's goal is the following:
a 3D camera capturing live images of fish
a YOLOv8 pose model detecting fish that are completely visible and not facing the camera, and also locating the nose and tail of each fish
using the distance values from the 3D camera to calculate each fish's length in mm (a rough sketch of this step follows below)
running the model on a live video stream to estimate the distribution of fish lengths in a tank
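For the length calculation, here is a minimal sketch of how it could look, assuming the depth frame is a per-pixel depth map in mm aligned with the RGB frame. The keypoint indices (NOSE_IDX, TAIL_IDX), the intrinsics (FX, FY, CX, CY), the backproject helper, and the camera_stream loop are all placeholders for whatever your dataset and 3D camera actually provide; only the ultralytics calls (YOLO, results[0].keypoints.xy) are real API.

import numpy as np
from ultralytics import YOLO

# Hypothetical keypoint indices and pinhole intrinsics -- replace with your
# dataset's keypoint order and your 3D camera's calibration values.
NOSE_IDX, TAIL_IDX = 0, 1
FX, FY, CX, CY = 900.0, 900.0, 640.0, 360.0

def backproject(u, v, depth_mm):
    """Turn a pixel (u, v) plus its depth in mm into a 3D point in mm."""
    x = (u - CX) * depth_mm / FX
    y = (v - CY) * depth_mm / FY
    return np.array([x, y, depth_mm])

def fish_lengths_mm(model, rgb_frame, depth_frame):
    """Estimate the nose-to-tail length of every detected fish in mm.

    depth_frame is assumed to be a depth map (mm) aligned with rgb_frame.
    """
    lengths = []
    result = model(rgb_frame, verbose=False)[0]
    if result.keypoints is None:
        return lengths
    for kpts in result.keypoints.xy.cpu().numpy():  # (num_keypoints, 2) per fish
        (un, vn), (ut, vt) = kpts[NOSE_IDX], kpts[TAIL_IDX]
        dn = depth_frame[int(vn), int(un)]
        dt = depth_frame[int(vt), int(ut)]
        if dn <= 0 or dt <= 0:  # skip fish with missing depth readings
            continue
        lengths.append(float(np.linalg.norm(
            backproject(un, vn, dn) - backproject(ut, vt, dt))))
    return lengths

# Usage on a live stream (capture loop is camera-specific):
# model = YOLO("fish-pose.pt")
# for rgb_frame, depth_frame in camera_stream():
#     all_lengths.extend(fish_lengths_mm(model, rgb_frame, depth_frame))

The length distribution for the tank would then just be the histogram of all_lengths collected over the stream.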
The problem is the following:
I have annotated images of multiple growth stages of fish, but the average growth stage of the fish in the training data will almost always be either smaller or bigger than the fish I'm measuring.
So when I train a model on all the data I have and then run it on a tank of fish that are at the upper end of growth, the model will detect the smaller fish in that tank more often, because most fish in the training data are smaller than the fish in the tank.
Does that make sense?
These values are just to show what I mean (assuming the model is always trained on all 5k samples).