First, here is the relevant part of the onnx2trt code:
import tensorrt as trt

def onnx2trt(onnx_path):
    logger = trt.Logger(trt.Logger.ERROR)
    builder = trt.Builder(logger)
    # An explicit-batch network is required for parsing ONNX models
    network = builder.create_network(1 << int(trt.NetworkDefinitionCreationFlag.EXPLICIT_BATCH))
    parser = trt.OnnxParser(network, logger)
    parser.parse_from_file(onnx_path)
    config = builder.create_builder_config()
    config.max_workspace_size = max_workspace_size
    config.set_flag(trt.BuilderFlag.FP16)
    # Optimization profile covering the min/opt/max batch sizes for the dynamic batch dimension
    op = builder.create_optimization_profile()
    # op.set_shape('model0/input', (1, )+shape, (batch_size[0], )+shape, (batch_size[1], )+shape)
    op.set_shape(network.get_input(0).name,
                 (min_batch_size, ) + input_shape,
                 (opt_batch_size, ) + input_shape,
                 (max_batch_size, ) + input_shape)
    config.add_optimization_profile(op)
    engine = builder.build_engine(network, config)
    # trt_path = onnx_path.replace('/onnx/', '/trt/').replace('.onnx', '.plan')
    trt_path = onnx_path.replace('.onnx', '.plan')
    with open(trt_path, 'wb') as f:
        f.write(engine.serialize())
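onnx2trt() relies on a few module-level variables that are defined elsewhere in the original script. A minimal sketch of how they might look and how the function would be called; the shape and batch values here are illustrative, not the author's:

max_workspace_size = 1 << 30                               # 1 GB build workspace
min_batch_size, opt_batch_size, max_batch_size = 1, 8, 16  # illustrative batch range
input_shape = (3, 224, 224)                                # illustrative CHW input shape

onnx2trt('model.onnx')  # writes model.plan next to the ONNX file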
During the ONNX-to-TensorRT conversion, the following line (line 15 in my script) was flagged as the error location:

config.max_workspace_size = max_workspace_size

where max_workspace_size = 1 << 30.
# The unit is bytes: for example, builder.max_workspace_size = 1 << 30 means 2^30 bytes, i.e. 1 GB.
# It sets an upper bound on the memory that any single layer of the model may use. At runtime each layer is allocated only as much as it needs, not a full 1 GB every time, but never more than 1 GB.
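A side note: max_workspace_size is deprecated on newer TensorRT releases (8.4+) in favor of the workspace memory-pool limit. A minimal sketch of the equivalent setting, assuming TensorRT >= 8.4:

# Equivalent of max_workspace_size on TensorRT >= 8.4
config.set_memory_pool_limit(trt.MemoryPoolType.WORKSPACE, 1 << 30)  # 1 GB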
The full error message is as follows:
TypeError: deserialize_cuda_engine(): incompatible function arguments. The following argument types are supported:
    1. (self: tensorrt.tensorrt.Runtime, serialized_engine: buffer) -> tensorrt.tensorrt.ICudaEngine
Invoked with: <tensorrt.tensorrt.Runtime object at 0x7feecb3c6530>, None
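The TypeError is raised on the loading side: deserialize_cuda_engine() was handed None instead of a serialized engine buffer. A common root cause is that builder.build_engine() silently returns None on failure, so no valid engine ever reaches the .plan file. A minimal defensive sketch (the error messages are illustrative):

# Guard the build step: build_engine() returns None on failure instead of raising.
engine = builder.build_engine(network, config)
if engine is None:
    raise RuntimeError('TensorRT engine build failed; check builder/parser logs')

# Guard the load step as well: deserialize_cuda_engine() also returns None on failure.
runtime = trt.Runtime(logger)
with open(trt_path, 'rb') as f:
    engine = runtime.deserialize_cuda_engine(f.read())
if engine is None:
    raise RuntimeError('Engine deserialization failed')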
This error is often attributed to an insufficient max_workspace_size, and the usual advice is to raise the shift above 30, but that did not help in my case.
The actual cause was that the nvidia-docker container had been created with --shm-size=32: the shared memory was too small to support the ONNX-to-TensorRT build. Changing it to 64 solved the problem.
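For reference, the shared-memory size is fixed when a container is created, so applying the fix means re-launching the container. A hedged example of the launch command; the image name is a placeholder, and the "g" unit suffix is my assumption, since Docker expects an explicit size unit:

docker run --gpus all --shm-size=64g -it <your-tensorrt-image>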
Feel free to reach out with any questions, and a like, favorite, and follow are always welcome~
Reference:
https://www.cnblogs.com/mrlonely2018/p/14841562.html