Converting to ONNX

A trained PyTorch model can be saved as a .pt file, and PyTorch's own export API can convert it to an ONNX model. The conversion script is as follows:

```python
import torch

dummy_input = torch.randn(1, 3, 64, 64, device='cuda')
model = torch.load("./face_emotions_model.pt")
model.eval()
model.cuda()
output = model(dummy_input)
torch.onnx.export(model, dummy_input, "face_emotions_model.onnx",
                  output_names=["output"], verbose=True)
```

Testing the ONNX model with OpenCV DNN

The converted ONNX model can be loaded directly by the OpenCV DNN module. The invocation style is as follows:

```python
import cv2 as cv
import numpy as np

landmark_net = cv.dnn.readNetFromONNX("landmarks_cnn.onnx")
image = cv.imread("D:/facedb/test/464.jpg")
cv.imshow("input", image)
h, w, c = image.shape
blob = cv.dnn.blobFromImage(image, 0.00392, (64, 64), (0.5, 0.5, 0.5), False) / 0.5
print(blob)
landmark_net.setInput(blob)
lm_pts = landmark_net.forward()
print(lm_pts)
for x, y in lm_pts:
    print(x, y)
    x1 = x * w
    y1 = y * h
    cv.circle(image, (np.int32(x1), np.int32(y1)), 2, (0, 0, 255), 2, 8, 0)
cv.imshow("five-point landmarks", image)
cv.imwrite("D:/landmark_det_result.png", image)
cv.waitKey(0)
cv.destroyAllWindows()
```

The results are as follows:

Converting ONNX to IR

How do we convert the ONNX file to the OpenVINO IR format? The answer is the Model Optimizer tool component of OpenVINO. The Model Optimizer supports converting both models trained through the common PyTorch workflow and pretrained models migrated from torchvision. To convert ONNX to IR, you first need ONNX support installed; running OpenVINO's prerequisite installation script directly provides that support. The screenshots are as follows:

Then run the following conversion script. No doubt, the conversion succeeds!
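The blobFromImage call above combines resizing, mean subtraction, and scaling; the trailing `/ 0.5` appears intended to complete the normalization `(x / 255 - 0.5) / 0.5` (blobFromImage subtracts the mean before applying the scale factor, so mean and scale alone do not produce it). Here is a minimal NumPy sketch of that target normalization; the `preprocess` helper name is my own, not from the original post:

```python
import numpy as np

def preprocess(image):
    """Hypothetical helper: normalize an HxWx3 uint8 image to [-1, 1]
    via (x / 255 - 0.5) / 0.5 and return a float32 blob in the NCHW
    layout that cv.dnn.blobFromImage produces."""
    x = image.astype(np.float32) / 255.0  # [0, 255] -> [0, 1]
    x = (x - 0.5) / 0.5                   # [0, 1] -> [-1, 1]
    return np.expand_dims(x.transpose(2, 0, 1), 0)  # HWC -> NCHW

img = np.full((64, 64, 3), 255, dtype=np.uint8)  # an all-white test image
blob = preprocess(img)
print(blob.shape)         # (1, 3, 64, 64)
print(float(blob.max()))  # 1.0
```

This is only a sanity check on the arithmetic; the actual pipeline should keep using blobFromImage so that resizing and channel handling match OpenCV's behavior.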
Accelerating inference with the Inference Engine

To accelerate inference with the OpenVINO Inference Engine, load the model through the OpenCV DNN module that ships with the OpenVINO installation package and set the preferred backend to the Inference Engine. This part of the code is as follows:

```cpp
dnn::Net emtion_net = readNetFromModelOptimizer(emotion_xml, emotion_bin);
emtion_net.setPreferableTarget(DNN_TARGET_CPU);
emtion_net.setPreferableBackend(DNN_BACKEND_INFERENCE_ENGINE);
```

Here readNetFromModelOptimizer loads the files produced by the OpenVINO Model Optimizer, and the Inference Engine backend accelerates inference. Run inference and parse the output to obtain the expression classification result; the code is as follows:

```cpp
Rect box(x1, y1, x2 - x1, y2 - y1);
Mat roi = frame(box);
Mat face_blob = blobFromImage(roi, 0.00392, Size(64, 64), Scalar(0.5, 0.5, 0.5), false, false);
emtion_net.setInput(face_blob);
Mat probs = emtion_net.forward();
int index = 0;
float max = -1;  // start below any valid probability
for (int i = 0; i < 8; i++) {
    const float* scores = probs.ptr<float>(0, i, 0);
    float score = scores[0];
    if (max < score) {
        max = score;
        index = i;
    }
}
rectangle(frame, box, Scalar(0, 255, 0));
```

The final results are as follows:
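The C++ loop above is simply an argmax over the eight emotion-class scores. A minimal Python sketch of the same logic, for reference (the `argmax_emotion` name and the assumed (1, 8) output shape are my own, not from the original post):

```python
import numpy as np

def argmax_emotion(probs):
    """Mirror of the C++ scan above: walk the 8 class scores and keep
    the index of the largest one. probs is assumed to have shape (1, 8),
    one row of class probabilities from net.forward()."""
    best_index, best_score = 0, -1.0  # start below any valid probability
    for i in range(probs.shape[1]):
        score = float(probs[0, i])
        if score > best_score:
            best_score, best_index = score, i
    return best_index, best_score

probs = np.array([[0.05, 0.1, 0.6, 0.05, 0.05, 0.05, 0.05, 0.05]],
                 dtype=np.float32)
idx, score = argmax_emotion(probs)
print(idx)  # 2
```

Initializing the running maximum below zero matters: since the scores are probabilities in [0, 1], starting at 1 (as easily happens with a typo) would mean no class ever wins the comparison.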