YOLO returns different results when executed with ONNX Runtime


    I am trying to reproduce the results of this Real-time Detection of Personal-Protective-Equipment (PPE) model. Besides the code, the authors also provide the dataset and the trained model on google drive.

    First I run inference with the original Keras model, and then with the converted ONNX model. The problem is that the results I get from the two models do not even have the same shape: the original model returns a list of three output arrays, while the converted model returns only a single one.


    At the moment I am testing it on Google Colab with the following code:

    import tensorflow.keras.backend as K
    from tensorflow.keras.layers import Input
    import numpy as np
    import pandas as pd
    import cv2
    import matplotlib.pyplot as plt
    import matplotlib as mpl
    from IPython.display import display, Math
    from time import time
    
    import sys
    sys.path.append('../')
    
    from drive.MyDrive.ONNX_demo.utils.image import letterbox_image, draw_detection
    from drive.MyDrive.ONNX_demo.utils.model import yolo_body
    
    from drive.MyDrive.ONNX_demo.utils.fixes import *
    fix_tf_gpu()
    
    import onnx
    # Compute the prediction with ONNX Runtime
    import onnxruntime as rt
    import tensorflow as tf  # needed below for tf.saved_model.save
    
    act_img = cv2.imread('drive/MyDrive/ONNX_demo/image/4.jpg')
    image_shape = act_img.shape[:-1]
    img = letterbox_image(act_img, (416,416))/255.
    img = np.expand_dims(img, 0)
    
    '''Show the image'''
    plt.imshow( act_img[:,:,::-1] )
    plt.axis('off')
    plt.show()
    #########################
    ###### Classes
    #########################
    class_names = ['H', 'V', 'W']
    
    anchor_boxes = np.array(
            [
            np.array([[ 76,  59], [ 84, 136], [188, 225]]) /32, # output-1 anchor boxes
            np.array([[ 25,  15], [ 46,  29], [ 27,  56]]) /16, # output-2 anchor boxes
            np.array([[ 5,    3], [ 10,   8], [ 12,  26]]) /8   # output-3 anchor boxes
            ],
            dtype='float64'
        )
    
    input_shape  = (416, 416)
    
    K.clear_session() # clear memory
    
    # number of classes and number of anchors
    num_classes = len(class_names)
    num_anchors = anchor_boxes.shape[0] * anchor_boxes.shape[1]
    
    # input and output
    input_tensor = Input( shape=(input_shape[0], input_shape[1], 3) ) # input
    num_out_filters = ( num_anchors//3 ) * ( 5 + num_classes )        # output
    
    #######
    ## Prediction with the original model 
    ##############
    
    ## Build and load the model 
    model = yolo_body(input_tensor, num_out_filters)
    
    weight_path = 'drive/MyDrive/ONNX_demo/models/pictor-ppe-v302-a1-yolo-v3-weights.h5'
    
    model.load_weights( weight_path )
    yolo_pred = model.predict(img)
    

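    To make the shape difference concrete, here is a minimal sketch of how I check what the original model returns (model.predict gives one array per YOLO scale; with the 416x416 input and the num_out_filters computed above, the expected spatial sizes are 13, 26 and 52):

    # inspect the Keras output: a list with one array per YOLO scale
    print(type(yolo_pred), len(yolo_pred))
    for i, out in enumerate(yolo_pred):
        print(f'output {i}: shape {out.shape}')  # expected e.g. (1, 13, 13, 24), (1, 26, 26, 24), (1, 52, 52, 24)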

    ### Convert the model to ONNX
    # save the complete model:
    
    tf.saved_model.save(model, "drive/MyDrive/ONNX_demo/models/save_model")
    
    # convert it to ONNX format (shell command; in Colab it is run with a leading !):
    
    !python3 -m tf2onnx.convert --saved-model "drive/MyDrive/ONNX_demo/models/save_model" --output "drive/MyDrive/ONNX_demo/models/model.onnx"
    
    ### Inference with ONNX
    sess = rt.InferenceSession("drive/MyDrive/ONNX_demo/models/model.onnx")
    input_name = sess.get_inputs()[0].name
    label_name = sess.get_outputs()[0].name
    yolo_pred =  sess.run([label_name], {input_name: img.astype(np.float32)})[0]
    

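    For comparison, a minimal sketch of how I list every output that the exported ONNX graph exposes and fetch them all (onnxruntime returns all outputs when None is passed as the output-name list):

    # enumerate all outputs of the converted model
    for out in sess.get_outputs():
        print(out.name, out.shape)
    
    # fetch every output at once instead of only the first one
    all_preds = sess.run(None, {input_name: img.astype(np.float32)})
    print(len(all_preds), [p.shape for p in all_preds])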

    As part of the conversion, I first convert the model from HDF5 to the SavedModel format so that the complete model is stored and not only the weights, as mentioned in this post: Unable to convert .h5 model to ONNX for inferencing through any means
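    For reference, tf2onnx also has a Python API, so the conversion could in principle be done directly from the in-memory Keras model instead of going through SavedModel plus the CLI (a sketch, assuming tf2onnx is installed; the input signature mirrors the 416x416x3 input used above, and opset 13 is just an example):

    import tf2onnx
    import tensorflow as tf
    
    # convert the Keras model directly to ONNX
    spec = (tf.TensorSpec((None, 416, 416, 3), tf.float32, name='input'),)
    model_proto, _ = tf2onnx.convert.from_keras(
        model,
        input_signature=spec,
        opset=13,
        output_path='drive/MyDrive/ONNX_demo/models/model.onnx',
    )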
