How do you specify the bfloat16 mixed precision with the Intel Extension for PyTorch?


I would like to know how to use mixed precision with PyTorch and Intel Extension for PyTorch.

I have tried looking at the documentation on their GitHub, but I can't find anything that specifies how to go from fp32 to bfloat16.

CodePudding user response:

The IPEX GitHub repository might not be the best place to look for API documentation. I would try the PyTorch IPEX page instead, which includes usage examples of the API.

This would be an example of how to use fp32:

model, optimizer = ipex.optimize(model, optimizer=optimizer, dtype=torch.float32)

This would be an example of how to use bfloat16:

model, optimizer = ipex.optimize(model, optimizer=optimizer, dtype=torch.bfloat16)
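For a fuller picture, here is a minimal training sketch, assuming a toy torch.nn model and the usual import alias ipex for intel_extension_for_pytorch; ipex.optimize prepares the model and optimizer for bfloat16, while the torch.cpu.amp.autocast context is what actually runs the forward pass in mixed precision:

import torch
import intel_extension_for_pytorch as ipex

# Toy stand-ins; replace with your own model, optimizer, and data.
model = torch.nn.Linear(128, 10)
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
criterion = torch.nn.CrossEntropyLoss()

model.train()
# Prepare the model and optimizer for bfloat16 mixed precision.
model, optimizer = ipex.optimize(model, optimizer=optimizer, dtype=torch.bfloat16)

data = torch.randn(32, 128)
target = torch.randint(0, 10, (32,))

# Run the forward pass under autocast so eligible ops execute in bfloat16.
with torch.cpu.amp.autocast(dtype=torch.bfloat16):
    output = model(data)
    loss = criterion(output, target)

loss.backward()
optimizer.step()
optimizer.zero_grad()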
