I am currently trying to build small convolutional regression models under very tight model-size constraints (at most a few thousand parameters).

Are there any rules of thumb/gold standards/best practices to consider here? E.g. should I prefer depth over width, do skip connections add anything at these small scales, are there any special training tricks that might boost performance, etc.?
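
For concreteness, here is roughly the scale I mean (a minimal PyTorch sketch with made-up layer sizes, just to illustrate the parameter budget, not a proposed architecture):

```python
# Hypothetical tiny 1D-conv regressor; layer sizes are placeholders to show
# the parameter budget, not a recommendation.
import torch
import torch.nn as nn

class TinyConvRegressor(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(1, 8, kernel_size=5, padding=2),    # 1*8*5 + 8  =  48 params
            nn.ReLU(),
            nn.Conv1d(8, 16, kernel_size=5, padding=2),   # 8*16*5 + 16 = 656 params
            nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),                      # no parameters
        )
        self.head = nn.Linear(16, 1)                      # 16 + 1 = 17 params

    def forward(self, x):                 # x: (batch, 1, length)
        x = self.features(x).squeeze(-1)  # (batch, 16)
        return self.head(x)               # (batch, 1) regression output

model = TinyConvRegressor()
print(sum(p.numel() for p in model.parameters()))  # 721 parameters, well under budget
```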

Any hints or pointers on where to look are greatly appreciated.

  • semicausalB · 10 months ago

    In my experience, it honestly depends on what you’re trying to have the models learn and the task at hand.

    - Spend lots of time cleaning up your data and doing feature engineering. Regulated industries like insurance spend significantly more time on feature engineering than on tuning fancy models, for example. (First sketch after this list.)

    - I would recommend trying regression and random forest models first, or even XGBoost. (Second sketch below.)
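
    To make the first point concrete, here is a minimal sketch of the kind of feature work I mean, assuming scikit-learn and made-up tabular inputs (the array sizes and transforms are placeholders, not a recipe for your data):

    ```python
    # Hypothetical feature-engineering step: standardize raw columns, then add
    # pairwise interaction terms. All shapes here are made up for illustration.
    import numpy as np
    from sklearn.pipeline import Pipeline
    from sklearn.preprocessing import StandardScaler, PolynomialFeatures

    rng = np.random.default_rng(0)
    X = rng.normal(size=(200, 4))        # 200 samples, 4 raw features (placeholder data)

    feature_pipe = Pipeline([
        ("scale", StandardScaler()),                              # zero mean, unit variance
        ("interactions", PolynomialFeatures(degree=2,
                                            interaction_only=True,
                                            include_bias=False)), # add pairwise products
    ])
    X_feat = feature_pipe.fit_transform(X)
    print(X_feat.shape)                  # (200, 10): 4 raw + 6 interaction features
    ```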
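
    And a minimal sketch of the second point, assuming scikit-learn (and optionally xgboost); the synthetic data and hyperparameters are placeholders, not tuned values:

    ```python
    # Hypothetical baseline: cross-validated random forest regression on synthetic data.
    import numpy as np
    from sklearn.ensemble import RandomForestRegressor
    from sklearn.model_selection import cross_val_score

    rng = np.random.default_rng(0)
    X = rng.normal(size=(500, 8))                                   # placeholder features
    y = 2.0 * X[:, 0] + np.sin(X[:, 1]) + rng.normal(scale=0.1, size=500)

    rf = RandomForestRegressor(n_estimators=200, max_depth=6, random_state=0)
    print(cross_val_score(rf, X, y, cv=5, scoring="neg_mean_absolute_error").mean())

    # If xgboost is installed, XGBRegressor is a near drop-in replacement:
    # from xgboost import XGBRegressor
    # xgb = XGBRegressor(n_estimators=200, max_depth=4, learning_rate=0.1)
    # print(cross_val_score(xgb, X, y, cv=5, scoring="neg_mean_absolute_error").mean())
    ```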