Keyword: Expected all tensors to be on the same device, but found at least two devices, cpu and cuda:0!
Error message:
ONNX: export failure ❌ 3.8s: Expected all tensors to be on the same device, but found at least two devices, cpu and cuda:0! (when checking argument for argument other in method wrapper__equal)
TensorRT: export failure ❌ 3.9s: Expected all tensors to be on the same device, but found at least two devices, cpu and cuda:0! (when checking argument for argument other in method wrapper__equal)
Traceback (most recent call last):
  File "/home/nvidia/ZED2i/ros2_ws/src/export.py", line 14, in <module>
    main()
  File "/home/nvidia/ZED2i/ros2_ws/src/export.py", line 9, in main
    model.export(format='engine', opset=11)  # creates 'yolov8n.engine'
  File "/home/nvidia/.local/lib/python3.8/site-packages/ultralytics/engine/model.py", line 310, in export
    return Exporter(overrides=args, _callbacks=self.callbacks)(model=self.model)
  File "/home/nvidia/.local/lib/python3.8/site-packages/torch/autograd/grad_mode.py", line 27, in decorate_context
    return func(*args, **kwargs)
  File "/home/nvidia/.local/lib/python3.8/site-packages/ultralytics/engine/exporter.py", line 252, in __call__
    f[1], _ = self.export_engine()
  File "/home/nvidia/.local/lib/python3.8/site-packages/ultralytics/engine/exporter.py", line 122, in outer_func
    raise e
  File "/home/nvidia/.local/lib/python3.8/site-packages/ultralytics/engine/exporter.py", line 117, in outer_func
    f, model = inner_func(*args, **kwargs)
  File "/home/nvidia/.local/lib/python3.8/site-packages/ultralytics/engine/exporter.py", line 586, in export_engine
    f_onnx, _ = self.export_onnx()
  File "/home/nvidia/.local/lib/python3.8/site-packages/ultralytics/engine/exporter.py", line 122, in outer_func
    raise e
  File "/home/nvidia/.local/lib/python3.8/site-packages/ultralytics/engine/exporter.py", line 117, in outer_func
    f, model = inner_func(*args, **kwargs)
  File "/home/nvidia/.local/lib/python3.8/site-packages/ultralytics/engine/exporter.py", line 333, in export_onnx
    torch.onnx.export(
  File "/home/nvidia/.local/lib/python3.8/site-packages/torch/onnx/__init__.py", line 319, in export
    return utils.export(model, args, f, export_params, verbose, training,
  File "/home/nvidia/.local/lib/python3.8/site-packages/torch/onnx/utils.py", line 113, in export
    _export(model, args, f, export_params, verbose, training, input_names, output_names,
  File "/home/nvidia/.local/lib/python3.8/site-packages/torch/onnx/utils.py", line 737, in _export
    params_dict = torch._C._jit_pass_onnx_deduplicate_initializers(graph, params_dict,
RuntimeError: Expected all tensors to be on the same device, but found at least two devices, cpu and cuda:0! (when checking argument for argument other in method wrapper__equal)
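From the traceback, export.py is just a small script whose main() loads a YOLOv8 model and calls export(). A minimal sketch of what it likely looks like follows; the 'yolov8n.pt' weight file is an assumption inferred from the "creates 'yolov8n.engine'" comment, while the structure follows the line numbers in the traceback.

from ultralytics import YOLO  # assumed import; the traceback only shows ultralytics' internal frames

def main():
    model = YOLO('yolov8n.pt')               # assumed weights, inferred from the 'yolov8n.engine' comment
    model.export(format='engine', opset=11)  # creates 'yolov8n.engine' -- line 9, where the export fails

if __name__ == '__main__':
    main()  # line 14 in the traceback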
For reference, the argument list of the torch.onnx.export(...) call reached at exporter.py line 333 (see the traceback above) includes:
    do_constant_folding=False,  # WARNING: DNN inference with torch>=1.12 may require do_constant_folding=False
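The error itself means that, somewhere inside torch.onnx.export, a tensor on cuda:0 is compared against a tensor on the cpu. A common way to avoid this class of failure, sketched below under the assumption that the installed ultralytics version accepts a device argument in export(), is to pin the whole export to one device instead of letting the model and the example input end up on different ones:

from ultralytics import YOLO

model = YOLO('yolov8n.pt')                         # assumed weights, as above
model.export(format='engine', opset=11, device=0)  # run the whole export on GPU 0
# Alternatively, keep everything on the CPU (a TensorRT engine itself must be built on a GPU):
# model.export(format='onnx', opset=11, device='cpu')

Whether this is enough to resolve the failure depends on the installed ultralytics and torch versions.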