Exporting RandLA-Net to ONNX and running inference with ONNX Runtime

First, download the RandLA-Net project: https://github.com/tsunghan-wu/RandLA-Net-pytorch

Exporting the ONNX model

import torch
from utils.config import ConfigSemanticKITTI as cfg
from network.RandLANet import Network

model = Network(cfg)
checkpoint = torch.load("./pretrain_model/checkpoint.tar")
model.load_state_dict(checkpoint['model_state_dict'])
model.eval()  # export in eval mode so BatchNorm/Dropout are frozen

# dummy inputs with the fixed shapes the model expects (4 layers, 4x subsampling)
input = {}
input['xyz'] = [torch.zeros([1, 45056, 3]), torch.zeros([1, 11264, 3]), torch.zeros([1, 2816, 3]), torch.zeros([1, 704, 3])]
input['neigh_idx'] = [torch.zeros([1, 45056, 16], dtype=torch.int64), torch.zeros([1, 11264, 16], dtype=torch.int64), torch.zeros([1, 2816, 16], dtype=torch.int64), torch.zeros([1, 704, 16], dtype=torch.int64)]
input['sub_idx'] = [torch.zeros([1, 11264, 16], dtype=torch.int64), torch.zeros([1, 2816, 16], dtype=torch.int64), torch.zeros([1, 704, 16], dtype=torch.int64), torch.zeros([1, 176, 16], dtype=torch.int64)]
input['interp_idx'] = [torch.zeros([1, 45056, 1], dtype=torch.int64), torch.zeros([1, 11264, 1], dtype=torch.int64), torch.zeros([1, 2816, 1], dtype=torch.int64), torch.zeros([1, 704, 1], dtype=torch.int64)]
input['features'] = torch.zeros([1, 3, 45056])
input['labels'] = torch.zeros([1, 45056], dtype=torch.int64)
input['logits'] = torch.zeros([1, 19, 45056])

torch.onnx.export(model, input, "randla-net.onnx", opset_version=13)
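The dummy shapes passed to the exporter are not arbitrary: with num_points = 4096 * 11 and a 4x subsampling ratio per layer, the per-layer point counts fall out directly. A quick sketch of that arithmetic:

```python
# Per-layer point counts behind the dummy export shapes above,
# assuming 4x subsampling at each of the 4 encoder layers.
num_points = 4096 * 11            # 45056 points fed to the network
sub_sampling_ratio = [4, 4, 4, 4]

sizes = [num_points]
for r in sub_sampling_ratio:
    sizes.append(sizes[-1] // r)

print(sizes)  # [45056, 11264, 2816, 704, 176]
```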

The structure of the exported ONNX model:
[figure: ONNX model structure]

ONNX Runtime inference

Python code:

import pickle
import numpy as np
import torch
from network.RandLANet import Network
from utils.data_process import DataProcessing as DP
from utils.config import ConfigSemanticKITTI as cfg

np.random.seed(0)
k_n = 16
num_points = 4096 * 11
num_layers = 4
num_classes = 19
sub_sampling_ratio = [4, 4, 4, 4]

if __name__ == '__main__':
    net = Network(cfg).to(torch.device("cpu"))
    checkpoint = torch.load("pretrain_model/checkpoint.tar", map_location=torch.device('cpu'))
    net.load_state_dict(checkpoint['model_state_dict'])

    points = np.load('./data/08/velodyne/000000.npy')
    possibility = np.zeros(points.shape[0]) * 1e-3  # [np.random.rand(points.shape[0]) * 1e-3]
    min_possibility = [float(np.min(possibility[-1]))]
    probs = [np.zeros(shape=[points.shape[0], num_classes], dtype=np.float32)]
    test_probs = probs
    test_smooth = 0.98

    import onnxruntime
    onnx_session = onnxruntime.InferenceSession("randla-net.onnx", providers=['CPUExecutionProvider'])
    input_name = [node.name for node in onnx_session.get_inputs()]
    output_name = [node.name for node in onnx_session.get_outputs()]

    net.eval()
    with torch.no_grad():
        with open('./data/08/KDTree/000000.pkl', 'rb') as f:
            tree = pickle.load(f)
        pc = np.array(tree.data, copy=False)
        labels = np.zeros(np.shape(pc)[0])

        while np.min(min_possibility) <= 0.5:
            # pick the least-visited point and take its num_points nearest neighbors
            cloud_ind = int(np.argmin(min_possibility))
            pick_idx = np.argmin(possibility)
            center_point = pc[pick_idx, :].reshape(1, -1)
            selected_idx = tree.query(center_point, num_points)[1][0]
            selected_pc = pc[selected_idx]
            selected_labels = labels[selected_idx]

            # raise the "possibility" of the visited region so later picks move elsewhere
            dists = np.sum(np.square(selected_pc - pc[pick_idx]), axis=1)
            delta = np.square(1 - dists / np.max(dists))
            possibility[selected_idx] += delta
            min_possibility[cloud_ind] = np.min(possibility)

            batch_pc = np.expand_dims(selected_pc, 0)
            batch_label = np.expand_dims(selected_labels, 0)
            batch_pc_idx = np.expand_dims(selected_idx, 0)
            batch_cloud_idx = np.expand_dims(np.array([cloud_ind], dtype=np.int32), 0)
            features = batch_pc

            # build the 4-level pyramid of points, kNN, pooling and upsampling indices
            input_points, input_neighbors, input_pools, input_up_samples = [], [], [], []
            for i in range(num_layers):
                neighbour_idx = DP.knn_search(batch_pc, batch_pc, k_n)
                sub_points = batch_pc[:, :batch_pc.shape[1] // sub_sampling_ratio[i], :]
                pool_i = neighbour_idx[:, :batch_pc.shape[1] // sub_sampling_ratio[i], :]
                up_i = DP.knn_search(sub_points, batch_pc, 1)
                input_points.append(batch_pc)
                input_neighbors.append(neighbour_idx)
                input_pools.append(pool_i)
                input_up_samples.append(up_i)
                batch_pc = sub_points
            flat_inputs = input_points + input_neighbors + input_pools + input_up_samples
            flat_inputs += [features, batch_label, batch_pc_idx, batch_cloud_idx]

            # batch_data feeds the PyTorch model, inputs feeds onnxruntime;
            # the ONNX input names ('xyz.1', '8', 'input.1', '17', ...) are whatever
            # the exporter assigned -- verify them with onnx_session.get_inputs()
            batch_data, inputs = {}, {}
            batch_data['xyz'] = []
            for tmp in flat_inputs[:num_layers]:
                batch_data['xyz'].append(torch.from_numpy(tmp).float())
            inputs['xyz.1'] = flat_inputs[:num_layers][0].astype(np.float32)
            inputs['xyz.2'] = flat_inputs[:num_layers][1].astype(np.float32)
            inputs['xyz.3'] = flat_inputs[:num_layers][2].astype(np.float32)
            inputs['xyz'] = flat_inputs[:num_layers][3].astype(np.float32)
            batch_data['neigh_idx'] = []
            for tmp in flat_inputs[num_layers:2 * num_layers]:
                batch_data['neigh_idx'].append(torch.from_numpy(tmp).long())
            inputs['neigh_idx.1'] = flat_inputs[num_layers:2 * num_layers][0].astype(np.int64)
            inputs['neigh_idx.2'] = flat_inputs[num_layers:2 * num_layers][1].astype(np.int64)
            inputs['neigh_idx.3'] = flat_inputs[num_layers:2 * num_layers][2].astype(np.int64)
            inputs['neigh_idx'] = flat_inputs[num_layers:2 * num_layers][3].astype(np.int64)
            batch_data['sub_idx'] = []
            for tmp in flat_inputs[2 * num_layers:3 * num_layers]:
                batch_data['sub_idx'].append(torch.from_numpy(tmp).long())
            inputs['8'] = flat_inputs[2 * num_layers:3 * num_layers][0].astype(np.int64)
            inputs['9'] = flat_inputs[2 * num_layers:3 * num_layers][1].astype(np.int64)
            inputs['10'] = flat_inputs[2 * num_layers:3 * num_layers][2].astype(np.int64)
            inputs['11'] = flat_inputs[2 * num_layers:3 * num_layers][3].astype(np.int64)
            batch_data['interp_idx'] = []
            for tmp in flat_inputs[3 * num_layers:4 * num_layers]:
                batch_data['interp_idx'].append(torch.from_numpy(tmp).long())
            inputs['12'] = flat_inputs[3 * num_layers:4 * num_layers][0].astype(np.int64)
            inputs['13'] = flat_inputs[3 * num_layers:4 * num_layers][1].astype(np.int64)
            inputs['14'] = flat_inputs[3 * num_layers:4 * num_layers][2].astype(np.int64)
            inputs['15'] = flat_inputs[3 * num_layers:4 * num_layers][3].astype(np.int64)
            batch_data['features'] = torch.from_numpy(flat_inputs[4 * num_layers]).transpose(1, 2).float()
            inputs['input.1'] = np.swapaxes(flat_inputs[4 * num_layers], 1, 2).astype(np.float32)
            batch_data['labels'] = torch.from_numpy(flat_inputs[4 * num_layers + 1]).long()
            inputs['17'] = flat_inputs[4 * num_layers + 1].astype(np.int64)
            input_inds = flat_inputs[4 * num_layers + 2]
            cloud_inds = flat_inputs[4 * num_layers + 3]

            end_points = net(batch_data)              # PyTorch reference run
            outputs = onnx_session.run(None, inputs)  # onnxruntime run, for comparison

            end_points['logits'] = end_points['logits'].transpose(1, 2).cpu().numpy()
            for j in range(end_points['logits'].shape[0]):
                probs = end_points['logits'][j]       # (45056, 19)
                inds = input_inds[j]
                c_i = cloud_inds[j][0]
                test_probs[c_i][inds] = test_smooth * test_probs[c_i][inds] + (1 - test_smooth) * probs

        for j in range(len(test_probs)):
            pred = np.argmax(test_probs[j], 1).astype(np.uint32) + 1
            output = np.concatenate((points, pred.reshape(-1, 1)), axis=1)
            np.savetxt('./result/output.txt', output)
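The while loop above is RandLA-Net's spatially regular sampling: every crop raises the `possibility` of the points it covered by (1 - d/d_max)^2, so the next crop is centered on the least-visited point. A self-contained toy sketch of one such update (synthetic random points, not a real scan):

```python
import numpy as np

# Toy illustration of the "possibility" update used in the inference loop above.
rng = np.random.default_rng(0)
pc = rng.random((100, 3)).astype(np.float32)   # synthetic point cloud
possibility = np.zeros(pc.shape[0])

pick_idx = int(np.argmin(possibility))         # least-visited point
dists = np.sum(np.square(pc - pc[pick_idx]), axis=1)
delta = np.square(1 - dists / np.max(dists))   # 1 at the pick, 0 at the farthest point
possibility += delta
```

The picked point receives the largest increment and the farthest point receives none, which is exactly what pushes successive crops toward unvisited regions.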

C++ code:

#include <iostream>
#include <fstream>
#include <vector>
#include <numeric>
#include <algorithm>
#include <cmath>
#include <pcl/io/pcd_io.h>
#include <pcl/point_types.h>
#include <pcl/search/kdtree.h>
#include <pcl/common/distances.h>
#include <onnxruntime_cxx_api.h>
#include "knn_.h"

const int k_n = 16;
const int num_classes = 19;
const int num_points = 4096 * 11;
const int num_layers = 4;
const float test_smooth = 0.98f;

std::vector<std::vector<long>> knn_search(pcl::PointCloud<pcl::PointXYZ>::Ptr& support_pts,
    pcl::PointCloud<pcl::PointXYZ>::Ptr& query_pts, int k)
{
    std::vector<float> points(support_pts->size() * 3);
    for (size_t i = 0; i < support_pts->size(); i++)
    {
        points[3 * i + 0] = support_pts->points[i].x;
        points[3 * i + 1] = support_pts->points[i].y;
        points[3 * i + 2] = support_pts->points[i].z;
    }
    std::vector<float> queries(query_pts->size() * 3);
    for (size_t i = 0; i < query_pts->size(); i++)
    {
        queries[3 * i + 0] = query_pts->points[i].x;
        queries[3 * i + 1] = query_pts->points[i].y;
        queries[3 * i + 2] = query_pts->points[i].z;
    }
    std::vector<long> indices(query_pts->size() * k);
    cpp_knn_omp(points.data(), support_pts->size(), 3, queries.data(), query_pts->size(), k, indices.data());
    std::vector<std::vector<long>> neighbour_idx(query_pts->size(), std::vector<long>(k));
    for (size_t i = 0; i < query_pts->size(); i++)
        for (int j = 0; j < k; j++)
            neighbour_idx[i][j] = indices[k * i + j];
    return neighbour_idx;
}

int main()
{
    Ort::Env env(ORT_LOGGING_LEVEL_WARNING, "randla-net");
    Ort::SessionOptions session_options;
    session_options.SetIntraOpNumThreads(1);
    session_options.SetGraphOptimizationLevel(GraphOptimizationLevel::ORT_ENABLE_EXTENDED);
    const wchar_t* model_path = L"randla-net.onnx";
    Ort::Session session(env, model_path, session_options);
    Ort::AllocatorWithDefaultOptions allocator;

    std::vector<const char*> input_node_names;
    for (size_t i = 0; i < session.GetInputCount(); i++)
        input_node_names.push_back(session.GetInputName(i, allocator));
    std::vector<const char*> output_node_names;
    for (size_t i = 0; i < session.GetOutputCount(); i++)
        output_node_names.push_back(session.GetOutputName(i, allocator));

    // Load the raw scan (one "x y z" triple per line).
    float x, y, z;
    pcl::PointCloud<pcl::PointXYZ>::Ptr points(new pcl::PointCloud<pcl::PointXYZ>);
    std::ifstream infile_points("000000.txt");
    while (infile_points >> x >> y >> z)
        points->push_back(pcl::PointXYZ(x, y, z));

    std::vector<float> possibility(points->size(), 0);
    std::vector<float> min_possibility = { 0 };
    std::vector<std::vector<float>> test_probs(points->size(), std::vector<float>(num_classes, 0));

    // Subsampled cloud; despite the .pkl extension it is parsed here as plain "x y z" text.
    pcl::PointCloud<pcl::PointXYZ>::Ptr pc(new pcl::PointCloud<pcl::PointXYZ>);
    std::ifstream infile_pc("000000.pkl", std::ios::binary);
    while (infile_pc >> x >> y >> z)
        pc->push_back(pcl::PointXYZ(x, y, z));
    std::vector<float> labels(pc->size(), 0);

    pcl::search::KdTree<pcl::PointXYZ>::Ptr kdtree(new pcl::search::KdTree<pcl::PointXYZ>);
    kdtree->setInputCloud(pc);

    while (*std::min_element(min_possibility.begin(), min_possibility.end()) < 0.5)
    {
        // Pick the least-visited point and take its num_points nearest neighbors.
        int cloud_ind = std::min_element(min_possibility.begin(), min_possibility.end()) - min_possibility.begin();
        int pick_idx = std::min_element(possibility.begin(), possibility.end()) - possibility.begin();
        pcl::PointXYZ center_point = pc->points[pick_idx];
        std::vector<int> selected_idx(num_points);
        std::vector<float> distances(num_points);
        kdtree->nearestKSearch(center_point, num_points, selected_idx, distances);

        pcl::PointCloud<pcl::PointXYZ>::Ptr selected_pc(new pcl::PointCloud<pcl::PointXYZ>);
        pcl::copyPointCloud(*pc, selected_idx, *selected_pc);
        std::vector<float> selected_labels(num_points);
        for (int i = 0; i < num_points; i++)
            selected_labels[i] = labels[selected_idx[i]];

        // Raise the "possibility" of the visited region so later picks move elsewhere.
        std::vector<float> dists(num_points);
        for (int i = 0; i < num_points; i++)
            dists[i] = pcl::squaredEuclideanDistance(selected_pc->points[i], pc->points[pick_idx]);
        float max_dists = *std::max_element(dists.begin(), dists.end());
        for (int i = 0; i < num_points; i++)
            possibility[selected_idx[i]] += pow(1 - dists[i] / max_dists, 2);
        min_possibility[cloud_ind] = *std::min_element(possibility.begin(), possibility.end());

        pcl::PointCloud<pcl::PointXYZ>::Ptr features(new pcl::PointCloud<pcl::PointXYZ>);
        pcl::copyPointCloud(*selected_pc, *features);

        // Build the 4-level pyramid: kNN, 4x subsampling, pooling and upsampling indices.
        std::vector<pcl::PointCloud<pcl::PointXYZ>::Ptr> input_points;
        std::vector<std::vector<std::vector<long>>> input_neighbors, input_pools, input_up_samples;
        for (int i = 0; i < num_layers; i++)
        {
            std::vector<std::vector<long>> neighbour_idx = knn_search(selected_pc, selected_pc, k_n);
            pcl::PointCloud<pcl::PointXYZ>::Ptr sub_points(new pcl::PointCloud<pcl::PointXYZ>);
            std::vector<int> index(selected_pc->size() / 4);
            std::iota(index.begin(), index.end(), 0);
            pcl::copyPointCloud(*selected_pc, index, *sub_points);
            std::vector<std::vector<long>> pool_i(neighbour_idx.begin(), neighbour_idx.begin() + selected_pc->size() / 4);
            std::vector<std::vector<long>> up_i = knn_search(sub_points, selected_pc, 1);
            input_points.push_back(selected_pc);
            input_neighbors.push_back(neighbour_idx);
            input_pools.push_back(pool_i);
            input_up_samples.push_back(up_i);
            selected_pc = sub_points;
        }

        // Pack the 18 network inputs in the model's input order;
        // the backing buffers must stay alive until session.Run() returns.
        auto memory_info = Ort::MemoryInfo::CreateCpu(OrtArenaAllocator, OrtMemTypeDefault);
        std::vector<std::vector<float>> float_buffers;
        std::vector<std::vector<int64_t>> int64_buffers;
        std::vector<Ort::Value> inputs;

        for (int i = 0; i < num_layers; i++)  // xyz.1, xyz.2, xyz.3, xyz
        {
            std::vector<float> values(input_points[i]->size() * 3);
            for (size_t j = 0; j < input_points[i]->size(); j++)
            {
                values[3 * j + 0] = input_points[i]->points[j].x;
                values[3 * j + 1] = input_points[i]->points[j].y;
                values[3 * j + 2] = input_points[i]->points[j].z;
            }
            std::vector<int64_t> dims = { 1, (int64_t)input_points[i]->size(), 3 };
            float_buffers.push_back(std::move(values));
            inputs.push_back(Ort::Value::CreateTensor<float>(memory_info, float_buffers.back().data(),
                float_buffers.back().size(), dims.data(), dims.size()));
        }

        // Helper for the int64 index inputs (1, n, k).
        auto pack_idx = [&](std::vector<std::vector<long>>& idx, int k)
        {
            std::vector<int64_t> values(idx.size() * k);
            for (size_t i = 0; i < idx.size(); i++)
                for (int j = 0; j < k; j++)
                    values[k * i + j] = idx[i][j];
            std::vector<int64_t> dims = { 1, (int64_t)idx.size(), k };
            int64_buffers.push_back(std::move(values));
            inputs.push_back(Ort::Value::CreateTensor<int64_t>(memory_info, int64_buffers.back().data(),
                int64_buffers.back().size(), dims.data(), dims.size()));
        };
        for (int i = 0; i < num_layers; i++) pack_idx(input_neighbors[i], k_n);  // neigh_idx.1 .. neigh_idx
        for (int i = 0; i < num_layers; i++) pack_idx(input_pools[i], k_n);      // sub_idx inputs 8..11
        for (int i = 0; i < num_layers; i++) pack_idx(input_up_samples[i], 1);   // interp_idx inputs 12..15

        // features: (1, 3, N) in channel-major layout.
        std::vector<float> features_values(3 * features->size());
        for (size_t i = 0; i < features->size(); i++)
        {
            features_values[features->size() * 0 + i] = features->points[i].x;
            features_values[features->size() * 1 + i] = features->points[i].y;
            features_values[features->size() * 2 + i] = features->points[i].z;
        }
        std::vector<int64_t> features_dims = { 1, 3, (int64_t)features->size() };
        float_buffers.push_back(std::move(features_values));
        inputs.push_back(Ort::Value::CreateTensor<float>(memory_info, float_buffers.back().data(),
            float_buffers.back().size(), features_dims.data(), features_dims.size()));

        // labels: (1, N).
        std::vector<int64_t> labels_values(selected_labels.size());
        for (size_t i = 0; i < selected_labels.size(); i++)
            labels_values[i] = (int64_t)selected_labels[i];
        std::vector<int64_t> labels_dims = { 1, (int64_t)selected_labels.size() };
        int64_buffers.push_back(std::move(labels_values));
        inputs.push_back(Ort::Value::CreateTensor<int64_t>(memory_info, int64_buffers.back().data(),
            int64_buffers.back().size(), labels_dims.data(), labels_dims.size()));

        std::vector<Ort::Value> outputs = session.Run(Ort::RunOptions{ nullptr }, input_node_names.data(),
            inputs.data(), input_node_names.size(), output_node_names.data(), output_node_names.size());

        // Output 18 holds the logits with shape 1 x 19 x 45056.
        const float* logits = outputs[18].GetTensorData<float>();
        std::vector<int64_t> output_dims = outputs[18].GetTensorTypeAndShapeInfo().GetShape();

        // Transpose to per-point class scores (45056 x 19) and blend into test_probs.
        for (size_t i = 0; i < (size_t)output_dims[2]; i++)
            for (size_t j = 0; j < (size_t)output_dims[1]; j++)
                test_probs[selected_idx[i]][j] = test_smooth * test_probs[selected_idx[i]][j]
                    + (1 - test_smooth) * logits[j * output_dims[2] + i];
    }

    std::ofstream output("output.txt");
    for (size_t i = 0; i < test_probs.size(); i++)
    {
        int pred = std::max_element(test_probs[i].begin(), test_probs[i].end()) - test_probs[i].begin() + 1;
        output << points->points[i].x << " " << points->points[i].y << " "
               << points->points[i].z << " " << pred << std::endl;
    }
    return 0;
}
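The index arithmetic used above to unpack the flat 1×19×45056 output buffer (`probs[i][j] = pred[j * N + i]`) is just a transpose of the row-major logits. A minimal NumPy check with toy sizes:

```python
import numpy as np

# Toy check: reading a (1, C, N) row-major buffer as flat[j * N + i]
# yields the per-point class scores of shape (N, C), i.e. the transpose.
C, N = 3, 5
logits = np.arange(C * N, dtype=np.float32).reshape(1, C, N)
flat = logits.ravel()

probs = np.empty((N, C), dtype=np.float32)
for i in range(N):
    for j in range(C):
        probs[i, j] = flat[j * N + i]

assert np.array_equal(probs, logits[0].T)
```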

Prediction result:
[figure: prediction result]

The complete project is available at: https://github.com/taifyang/RandLA-Net-onnxruntime

