Tested and verified | OpenVINO supports reading the ONNX file format directly



01 Feature support
The OpenVINO 2020 R04 version supports reading ONNX format files directly. It uses the same function that previously read IR files; when the second parameter is left at its empty default, it will try to read an ONNX format file. The relevant function and its parameters are explained as follows:
CNNNetwork InferenceEngine::Core::ReadNetwork(
    const std::string& modelPath,
    const std::string& binPath = {}
) const

where:
modelPath is the path to the model input file (.xml or .onnx);
binPath is the path to the IR weights file (*.bin); if it is empty, the engine tries to read a .bin file with the same name as modelPath, and if that fails it tries to load the model file directly.
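
As a minimal Python sketch of the two call patterns (using IECore.read_network, the Python equivalent of the C++ ReadNetwork, with the resnet18 files produced below):

import IECore from the openvino package:

from openvino.inference_engine import IECore

ie = IECore()
# IR format: topology (.xml) plus separate weights (.bin)
net_ir = ie.read_network(model="resnet18.xml", weights="resnet18.bin")
# ONNX format: a single file, no weights argument needed
net_onnx = ie.read_network(model="resnet18.onnx")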
02 ResNet model conversion
Here I use the pretrained ResNet18 model that ships with torchvision in PyTorch, and first convert it from PTH to ONNX format. The conversion script is as follows:
import torch
import torchvision

model = torchvision.models.resnet18(pretrained=True).eval()
dummy_input = torch.randn((1, 3, 224, 224))
torch.onnx.export(model, dummy_input, "resnet18.onnx")
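
To sanity-check the exported file, here is a quick optional sketch using the onnx package's checker (an extra step, not part of the original workflow):

import onnx

onnx_model = onnx.load("resnet18.onnx")
onnx.checker.check_model(onnx_model)  # raises an exception if the model is malformed
print("resnet18.onnx passed the ONNX checker")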

For converting to the IR intermediate file format, see this earlier post:
From PyTorch to ONNX to OpenVINO IR

So we now have the model in ONNX format. The model was trained on the ImageNet dataset and supports image classification over 1000 categories. The expected input format and preprocessing parameters are as follows:
Input image: H x W = 224 x 224
Channels: RGB, three-channel image
mean = [0.485, 0.456, 0.406]
std = [0.229, 0.224, 0.225]
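
A minimal sketch of this preprocessing with OpenCV and NumPy (cv2.imread returns BGR, hence the conversion to RGB first; the preprocess name is just for illustration):

import cv2
import numpy as np

def preprocess(bgr_image):
    # resize to the expected 224x224 and convert BGR (OpenCV default) to RGB
    rgb = cv2.cvtColor(cv2.resize(bgr_image, (224, 224)), cv2.COLOR_BGR2RGB)
    # scale to [0, 1], then normalize with the ImageNet mean/std above
    x = np.float32(rgb) / 255.0
    x -= np.float32([0.485, 0.456, 0.406])
    x /= np.float32([0.229, 0.224, 0.225])
    # HWC -> CHW, the layout the network expects
    return x.transpose((2, 0, 1))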
03 Comparison test
Here we test the ResNet18 network in both ONNX format and IR format on OpenVINO 2020 R04; the resulting times are as follows:

It can be seen that when reading ONNX directly, the main bottleneck is the network-loading step; the corresponding load function is:
ExecutableNetwork InferenceEngine::Core::LoadNetwork(
    const CNNNetwork& network,
    const std::string& deviceName,
    const std::map<std::string, std::string>& config = {}
)
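
In the Python SDK this corresponds to IECore.load_network; a minimal sketch (the config dictionary is optional, and the CPU_THREADS_NUM entry is only an illustrative plugin setting):

from openvino.inference_engine import IECore

ie = IECore()
net = ie.read_network(model="resnet18.onnx")
# device_name selects the plugin; config passes optional plugin settings
exec_net = ie.load_network(network=net, device_name="CPU",
                           config={"CPU_THREADS_NUM": "4"})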

Fortunately, when processing video or invoking the model repeatedly in a loop, this function is part of the initialization step and runs only once, so it does not affect FPS, but its slowness is still outrageous! I hope the next version can improve this. In terms of inference speed the two formats are basically the same, and FPS stays stable when processing video. Stable ONNX model loading and inference support in OpenVINO will be good news for many PyTorch developers; fast model inference on the CPU is no longer a dream. You can see my inference execution time and FPS below:

This speed, need I say more? Go for it!
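
To illustrate why the load cost does not hurt FPS, here is a minimal video-loop sketch (assuming a camera at index 0 and the preprocess helper sketched earlier): read_network and load_network run once during initialization, and only infer runs per frame.

import cv2
from openvino.inference_engine import IECore

ie = IECore()
net = ie.read_network(model="resnet18.onnx")                 # one-time cost
exec_net = ie.load_network(network=net, device_name="CPU")   # one-time cost
input_blob = next(iter(net.input_info))

cap = cv2.VideoCapture(0)
while True:
    ret, frame = cap.read()
    if not ret:
        break
    # per-frame cost: preprocessing + synchronous inference
    res = exec_net.infer(inputs={input_blob: [preprocess(frame)]})
cap.release()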
04 Test code
The image_classification method runs the IR format by default; change the parameter to True to run inference with the ONNX format. The code is as follows:
from __future__ import print_function
import cv2
import numpy as np
import time
import logging as log
from openvino.inference_engine import IECore

with open('imagenet_classes.txt') as f:
    labels = [line.strip() for line in f.readlines()]


def image_classification(use_onnx=False):
    model_xml = "resnet18.xml"
    model_bin = "resnet18.bin"
    onnx_model = "resnet18.onnx"

    # Plugin initialization for specified device and load extensions library if specified
    log.info("Creating Inference Engine")
    ie = IECore()
    # Read IR
    log.info("Loading network files:\n\t{}\n\t{}".format(model_xml, model_bin))
    inf_start = time.time()
    if use_onnx:
        # load the ONNX format directly
        net = ie.read_network(model=onnx_model)
    else:
        # load the IR format
        net = ie.read_network(model=model_xml, weights=model_bin)
    load_time = time.time() - inf_start
    print("read network time(ms): %.3f" % (load_time * 1000))

    log.info("Preparing input blobs")
    input_blob = next(iter(net.input_info))
    out_blob = next(iter(net.outputs))

    # Read and pre-process input images
    n, c, h, w = net.input_info[input_blob].input_data.shape

    src = cv2.imread("D:/images/messi.jpg")
    # image = cv2.dnn.blobFromImage(src, 0.00375, (w, h), (123.675, 116.28, 103.53), True)
    image = cv2.cvtColor(src, cv2.COLOR_BGR2RGB)  # model expects RGB, cv2.imread returns BGR
    image = cv2.resize(image, (w, h))
    image = np.float32(image) / 255.0
    image -= np.float32([0.485, 0.456, 0.406])
    image /= np.float32([0.229, 0.224, 0.225])
    image = image.transpose((2, 0, 1))

    # Loading model to the plugin
    log.info("Loading model to the plugin")
    start_load = time.time()
    exec_net = ie.load_network(network=net, device_name="CPU")
    end_load = time.time() - start_load
    print("load time(ms): %.3f" % (end_load * 1000))

    # Start sync inference
    log.info("Starting inference in synchronous mode")
    inf_start1 = time.time()
    res = exec_net.infer(inputs={input_blob: [image]})
    inf_end1 = time.time() - inf_start1
    print("infer network time(ms): %.3f" % (inf_end1 * 1000))

    # Processing output blob
    log.info("Processing output blob")
    res = res[out_blob]
    label_index = np.argmax(res, 1)
    label_txt = labels[label_index[0]]
    inf_end = time.time()
    det_time = inf_end - inf_start1
    inf_time_message = "Inference time: {:.3f} ms, FPS: {:.3f}".format(det_time * 1000, 1000 / (det_time * 1000 + 1))
    cv2.putText(src, label_txt, (10, 50), cv2.FONT_HERSHEY_SIMPLEX, 1.0, (255, 0, 255), 2, 8)
    cv2.putText(src, inf_time_message, (10, 100), cv2.FONT_HERSHEY_SIMPLEX, 1.0, (0, 0, 255), 2, 8)
    cv2.imshow("ResNet18 - from PyTorch image classification", src)
    cv2.waitKey(0)
    cv2.destroyAllWindows()


if __name__ == "__main__":
    image_classification(True)


One more thing I noticed: in the OpenVINO 2020 R04 Python SDK, the network inputs are no longer accessed through the old inputs attribute; you can see what replaces it in the code above.
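
As a small sketch of the change (the old net.inputs accessor is shown commented out for comparison; 2020 R04 uses net.input_info instead, as in the test code above):

# old style (before 2020 R04):
# input_blob = next(iter(net.inputs))
# n, c, h, w = net.inputs[input_blob].shape

# new style (2020 R04):
input_blob = next(iter(net.input_info))
n, c, h, w = net.input_info[input_blob].input_data.shape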