Deploying PyTorch only for prediction


I've trained my model locally and now I want to use it in my Kubernetes cluster. Unfortunately, all the Docker images for PyTorch are around 5 GB because they bundle everything needed for training, which I won't need now. I've created my own image, which is only 3.5 GB, but that's still huge. Is there a slim PyTorch build just for predictions? If not, which parts of the package can I safely remove, and how?

CodePudding user response:

There is no easy answer for the Python version of PyTorch, unfortunately (or at least none I'm aware of).

Python, in general, is not well-suited for slim Docker deployments, as it drags in all of its dependencies. Even if you only need a fraction of their functionality, imports usually sit at the top of each file, which makes the selective removal you mention infeasible for a project of PyTorch's size and complexity.

There is a way out though...

TorchScript

Given your trained model, you can convert it to a traced or scripted version (see here).
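A minimal export sketch (the torchvision resnet18 is purely a stand-in for your own trained model; use torch.jit.script instead of torch.jit.trace if your forward pass has data-dependent control flow):

```python
import torch
import torchvision

# Placeholder model: substitute your own trained network here.
model = torchvision.models.resnet18(weights=None)
model.eval()

# Tracing records the ops executed for one example input; if forward()
# contains data-dependent control flow, use torch.jit.script(model) instead.
example_input = torch.rand(1, 3, 224, 224)
traced = torch.jit.trace(model, example_input)

# The saved artifact is self-contained and loadable without the Python class.
traced.save("model.pt")
```

After you manage that: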

Inference in other languages

Write your inference code in another language, either Java or C++ (see here for more info).

I have only used C++, but I think you might get there more easily with Java.
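Whichever language you pick, it's worth first confirming that the exported file loads and runs on its own, without access to your original model's Python source; a quick check, still in Python (the input shape is an assumption matching the trace above):

```python
import torch

# Load the exported module back with no reference to the original model
# class -- roughly what a C++ or Java runtime will do via libtorch.
loaded = torch.jit.load("model.pt")
loaded.eval()

with torch.no_grad():
    # Input shape is an assumption matching the trace above.
    output = loaded(torch.rand(1, 3, 224, 224))

print(output.shape)
```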

Results

I managed to get PyTorch CPU inference down to roughly 32 MB. GPU inference would weigh considerably more and be far more complex; the cuDNN dependency alone would probably add around 1 GB.

C++ way

Please note that the torchlambda project (I'm its creator) is not currently maintained, but hopefully it gives you some tips at least.

Additional notes:

  • It also uses the AWS SDK, which you would have to remove, at least from these files
  • You don't need static compilation; it helps reach the lowest image size I could come up with, but it isn't strictly necessary (skipping it costs an additional ~100 MB or so)

Final

  • Try Java first, as its packaging is probably saner (although the final image would probably be a little bigger)
  • The C++ route hasn't been tested with the newest PyTorch version and may change with basically any release
  • In general it takes A LOT of time and debugging, unfortunately.