Implementing an RNN Prediction Model in PyTorch and Running Inference on the Exported ONNX Model from C++

Implementing the RNN Model in PyTorch

Code

```python
import torch
import torch.nn as nn


class RNN(nn.Module):
    def __init__(self, seq_len, input_size, hidden_size, output_size, num_layers, device):
        super(RNN, self).__init__()
        self._seq_len = seq_len
        self._input_size = input_size
        self._output_size = output_size
        self._hidden_size = hidden_size
        self._device = device
        self._num_layers = num_layers
        self.rnn = nn.RNN(input_size=input_size,
                          hidden_size=self._hidden_size,
                          num_layers=self._num_layers,
                          batch_first=True)
        self.fc = nn.Linear(self._seq_len * self._hidden_size, self._output_size)

    def forward(self, x, hidden_prev):
        out, hidden_prev = self.rnn(x, hidden_prev)
        out = out.contiguous().view(out.shape[0], -1)
        out = self.fc(out)
        return out, hidden_prev


seq_len = 10
batch_size = 20
input_size = 10
output_size = 10
hidden_size = 32
num_layers = 2

model = RNN(seq_len, input_size, hidden_size, output_size, num_layers, "cpu")
hidden_prev = torch.zeros(num_layers, batch_size, hidden_size).to("cpu")
model.eval()

input_names = ["input", "hidden_prev_in"]
output_names = ["output", "hidden_prev_out"]

x = torch.randn((batch_size, seq_len, input_size))
y, hidden_prev = model(x, hidden_prev)
print(x.shape)
print(hidden_prev.shape)
print(y.shape)
print(hidden_prev.shape)

torch.onnx.export(model, (x, hidden_prev), 'RNN.onnx', verbose=True,
                  input_names=input_names, output_names=output_names,
                  dynamic_axes={'input': [0], 'hidden_prev_in': [1],
                                'output': [0], 'hidden_prev_out': [1]})

import onnx

model = onnx.load("RNN.onnx")
print("load model done.")
onnx.checker.check_model(model)
print(onnx.helper.printable_graph(model.graph))
print("check model done.")
```

Output

```
torch.Size([20, 10, 10])
torch.Size([2, 20, 32])
torch.Size([20, 10])
torch.Size([2, 20, 32])
/home/ubuntu/anaconda3/envs/py37/lib/python3.7/site-packages/torch/onnx/utils.py:2041: UserWarning: No names were found for specified dynamic axes of provided input. Automatically generated names will be applied to each dynamic axes of input input
/home/ubuntu/anaconda3/envs/py37/lib/python3.7/site-packages/torch/onnx/utils.py:2041: UserWarning: No names were found for specified dynamic axes of provided input. Automatically generated names will be applied to each dynamic axes of input hidden_prev
/home/ubuntu/anaconda3/envs/py37/lib/python3.7/site-packages/torch/onnx/utils.py:2041: UserWarning: No names were found for specified dynamic axes of provided input. Automatically generated names will be applied to each dynamic axes of input output
/home/ubuntu/anaconda3/envs/py37/lib/python3.7/site-packages/torch/onnx/symbolic_opset9.py:4322: UserWarning: Exporting a model to ONNX with a batch_size other than 1, with a variable length with RNN_TANH can cause an error when running the ONNX model with a different batch size. Make sure to save the model with a batch size of 1, or define the initial states (h0/c0) as inputs of the model.
/home/ubuntu/anaconda3/envs/py37/lib/python3.7/site-packages/torch/onnx/_internal/jit_utils.py:258: UserWarning: The shape inference of prim::Constant type is missing, so it may result in wrong shape inference for the exported graph. Please consider adding it in symbolic function. (Triggered internally at ../torch/csrc/jit/passes/onnx/shape_type_inference.cpp:1884.)
/home/ubuntu/anaconda3/envs/py37/lib/python3.7/site-packages/torch/onnx/utils.py:688: UserWarning: The shape inference of prim::Constant type is missing, so it may result in wrong shape inference for the exported graph. Please consider adding it in symbolic function. (Triggered internally at ../torch/csrc/jit/passes/onnx/shape_type_inference.cpp:1884.)
/home/ubuntu/anaconda3/envs/py37/lib/python3.7/site-packages/torch/onnx/utils.py:1179: UserWarning: The shape inference of prim::Constant type is missing, so it may result in wrong shape inference for the exported graph. Please consider adding it in symbolic function. (Triggered internally at ../torch/csrc/jit/passes/onnx/shape_type_inference.cpp:1884.)
Exported graph: graph(%input : Float(*, 10, 10, strides=[100, 10, 1], requires_grad=0, device=cpu),
      %hidden_prev.1 : Float(2, *, 32, strides=[640, 32, 1], requires_grad=1, device=cpu),
      %fc.weight : Float(10, 320, strides=[320, 1], requires_grad=1, device=cpu),
      %fc.bias : Float(10, strides=[1], requires_grad=1, device=cpu),
      %onnx::RNN_58 : Float(1, 32, 10, strides=[320, 10, 1], requires_grad=0, device=cpu),
      %onnx::RNN_59 : Float(1, 32, 32, strides=[1024, 32, 1], requires_grad=0, device=cpu),
      %onnx::RNN_60 : Float(1, 64, strides=[64, 1], requires_grad=0, device=cpu),
      %onnx::RNN_62 : Float(1, 32, 32, strides=[1024, 32, 1], requires_grad=0, device=cpu),
      %onnx::RNN_63 : Float(1, 32, 32, strides=[1024, 32, 1], requires_grad=0, device=cpu),
      %onnx::RNN_64 : Float(1, 64, strides=[64, 1], requires_grad=0, device=cpu)):
  %/rnn/Transpose_output_0 : Float(10, *, 10, device=cpu) = onnx::Transpose[perm=[1, 0, 2], onnx_name="/rnn/Transpose"](%input), scope: __main__.RNN::/torch.nn.modules.rnn.RNN::rnn # /home/ubuntu/anaconda3/envs/py37/lib/python3.7/site-packages/torch/nn/modules/rnn.py:478:0
  %onnx::RNN_13 : Tensor? = prim::Constant(), scope: __main__.RNN::/torch.nn.modules.rnn.RNN::rnn # /home/ubuntu/anaconda3/envs/py37/lib/python3.7/site-packages/torch/nn/modules/rnn.py:478:0
  %/rnn/Constant_output_0 : Long(1, strides=[1], device=cpu) = onnx::Constant[value={0}, onnx_name="/rnn/Constant"](), scope: __main__.RNN::/torch.nn.modules.rnn.RNN::rnn # /home/ubuntu/anaconda3/envs/py37/lib/python3.7/site-packages/torch/nn/modules/rnn.py:478:0
  %/rnn/Constant_1_output_0 : Long(1, strides=[1], device=cpu) = onnx::Constant[value={0}, onnx_name="/rnn/Constant_1"](), scope: __main__.RNN::/torch.nn.modules.rnn.RNN::rnn # /home/ubuntu/anaconda3/envs/py37/lib/python3.7/site-packages/torch/nn/modules/rnn.py:478:0
  %/rnn/Constant_2_output_0 : Long(1, strides=[1], device=cpu) = onnx::Constant[value={1}, onnx_name="/rnn/Constant_2"](), scope: __main__.RNN::/torch.nn.modules.rnn.RNN::rnn # /home/ubuntu/anaconda3/envs/py37/lib/python3.7/site-packages/torch/nn/modules/rnn.py:478:0
  %/rnn/Slice_output_0 : Float(1, *, 32, device=cpu) = onnx::Slice[onnx_name="/rnn/Slice"](%hidden_prev.1, %/rnn/Constant_1_output_0, %/rnn/Constant_2_output_0, %/rnn/Constant_output_0), scope: __main__.RNN::/torch.nn.modules.rnn.RNN::rnn # /home/ubuntu/anaconda3/envs/py37/lib/python3.7/site-packages/torch/nn/modules/rnn.py:478:0
  %/rnn/RNN_output_0 : Float(10, 1, *, 32, device=cpu), %/rnn/RNN_output_1 : Float(1, *, 32, device=cpu) = onnx::RNN[activations=["Tanh"], hidden_size=32, onnx_name="/rnn/RNN"](%/rnn/Transpose_output_0, %onnx::RNN_58, %onnx::RNN_59, %onnx::RNN_60, %onnx::RNN_13, %/rnn/Slice_output_0), scope: __main__.RNN::/torch.nn.modules.rnn.RNN::rnn # /home/ubuntu/anaconda3/envs/py37/lib/python3.7/site-packages/torch/nn/modules/rnn.py:478:0
  %/rnn/Constant_3_output_0 : Long(1, strides=[1], device=cpu) = onnx::Constant[value={1}, onnx_name="/rnn/Constant_3"](), scope: __main__.RNN::/torch.nn.modules.rnn.RNN::rnn # /home/ubuntu/anaconda3/envs/py37/lib/python3.7/site-packages/torch/nn/modules/rnn.py:478:0
  %/rnn/Squeeze_output_0 : Float(10, *, 32, device=cpu) = onnx::Squeeze[onnx_name="/rnn/Squeeze"](%/rnn/RNN_output_0, %/rnn/Constant_3_output_0), scope: __main__.RNN::/torch.nn.modules.rnn.RNN::rnn # /home/ubuntu/anaconda3/envs/py37/lib/python3.7/site-packages/torch/nn/modules/rnn.py:478:0
  %/rnn/Constant_4_output_0 : Long(1, strides=[1], device=cpu) = onnx::Constant[value={0}, onnx_name="/rnn/Constant_4"](), scope: __main__.RNN::/torch.nn.modules.rnn.RNN::rnn # /home/ubuntu/anaconda3/envs/py37/lib/python3.7/site-packages/torch/nn/modules/rnn.py:478:0
  %/rnn/Constant_5_output_0 : Long(1, strides=[1], device=cpu) = onnx::Constant[value={1}, onnx_name="/rnn/Constant_5"](), scope: __main__.RNN::/torch.nn.modules.rnn.RNN::rnn # /home/ubuntu/anaconda3/envs/py37/lib/python3.7/site-packages/torch/nn/modules/rnn.py:478:0
  %/rnn/Constant_6_output_0 : Long(1, strides=[1], device=cpu) = onnx::Constant[value={2}, onnx_name="/rnn/Constant_6"](), scope: __main__.RNN::/torch.nn.modules.rnn.RNN::rnn # /home/ubuntu/anaconda3/envs/py37/lib/python3.7/site-packages/torch/nn/modules/rnn.py:478:0
  %/rnn/Slice_1_output_0 : Float(1, *, 32, device=cpu) = onnx::Slice[onnx_name="/rnn/Slice_1"](%hidden_prev.1, %/rnn/Constant_5_output_0, %/rnn/Constant_6_output_0, %/rnn/Constant_4_output_0), scope: __main__.RNN::/torch.nn.modules.rnn.RNN::rnn # /home/ubuntu/anaconda3/envs/py37/lib/python3.7/site-packages/torch/nn/modules/rnn.py:478:0
  %/rnn/RNN_1_output_0 : Float(10, 1, *, 32, device=cpu), %/rnn/RNN_1_output_1 : Float(1, *, 32, device=cpu) = onnx::RNN[activations=["Tanh"], hidden_size=32, onnx_name="/rnn/RNN_1"](%/rnn/Squeeze_output_0, %onnx::RNN_62, %onnx::RNN_63, %onnx::RNN_64, %onnx::RNN_13, %/rnn/Slice_1_output_0), scope: __main__.RNN::/torch.nn.modules.rnn.RNN::rnn # /home/ubuntu/anaconda3/envs/py37/lib/python3.7/site-packages/torch/nn/modules/rnn.py:478:0
  %/rnn/Constant_7_output_0 : Long(1, strides=[1], device=cpu) = onnx::Constant[value={1}, onnx_name="/rnn/Constant_7"](), scope: __main__.RNN::/torch.nn.modules.rnn.RNN::rnn # /home/ubuntu/anaconda3/envs/py37/lib/python3.7/site-packages/torch/nn/modules/rnn.py:478:0
  %/rnn/Squeeze_1_output_0 : Float(10, *, 32, device=cpu) = onnx::Squeeze[onnx_name="/rnn/Squeeze_1"](%/rnn/RNN_1_output_0, %/rnn/Constant_7_output_0), scope: __main__.RNN::/torch.nn.modules.rnn.RNN::rnn # /home/ubuntu/anaconda3/envs/py37/lib/python3.7/site-packages/torch/nn/modules/rnn.py:478:0
  %/rnn/Transpose_1_output_0 : Float(*, 10, 32, strides=[320, 32, 1], requires_grad=1, device=cpu) = onnx::Transpose[perm=[1, 0, 2], onnx_name="/rnn/Transpose_1"](%/rnn/Squeeze_1_output_0), scope: __main__.RNN::/torch.nn.modules.rnn.RNN::rnn # /home/ubuntu/anaconda3/envs/py37/lib/python3.7/site-packages/torch/nn/modules/rnn.py:478:0
  %hidden_prev : Float(2, *, 32, strides=[640, 32, 1], requires_grad=1, device=cpu) = onnx::Concat[axis=0, onnx_name="/rnn/Concat"](%/rnn/RNN_output_1, %/rnn/RNN_1_output_1), scope: __main__.RNN::/torch.nn.modules.rnn.RNN::rnn # /home/ubuntu/anaconda3/envs/py37/lib/python3.7/site-packages/torch/nn/modules/rnn.py:478:0
  %/Shape_output_0 : Long(3, strides=[1], device=cpu) = onnx::Shape[onnx_name="/Shape"](%/rnn/Transpose_1_output_0), scope: __main__.RNN:: # /zengli/20230320/ao/test/test_onnx_rnn.py:25:0
  %/Constant_output_0 : Long(device=cpu) = onnx::Constant[value={0}, onnx_name="/Constant"](), scope: __main__.RNN:: # /zengli/20230320/ao/test/test_onnx_rnn.py:25:0
  %/Gather_output_0 : Long(device=cpu) = onnx::Gather[axis=0, onnx_name="/Gather"](%/Shape_output_0, %/Constant_output_0), scope: __main__.RNN:: # /zengli/20230320/ao/test/test_onnx_rnn.py:25:0
  %onnx::Unsqueeze_50 : Long(1, strides=[1], device=cpu) = onnx::Constant[value={0}]()
  %/Unsqueeze_output_0 : Long(1, strides=[1], device=cpu) = onnx::Unsqueeze[onnx_name="/Unsqueeze"](%/Gather_output_0, %onnx::Unsqueeze_50), scope: __main__.RNN::
  %/Constant_1_output_0 : Long(1, strides=[1], requires_grad=0, device=cpu) = onnx::Constant[value={-1}, onnx_name="/Constant_1"](), scope: __main__.RNN::
  %/Concat_output_0 : Long(2, strides=[1], device=cpu) = onnx::Concat[axis=0, onnx_name="/Concat"](%/Unsqueeze_output_0, %/Constant_1_output_0), scope: __main__.RNN:: # /zengli/20230320/ao/test/test_onnx_rnn.py:25:0
  %/Reshape_output_0 : Float(*, *, strides=[320, 1], requires_grad=1, device=cpu) = onnx::Reshape[allowzero=0, onnx_name="/Reshape"](%/rnn/Transpose_1_output_0, %/Concat_output_0), scope: __main__.RNN:: # /zengli/20230320/ao/test/test_onnx_rnn.py:25:0
  %output : Float(*, 10, strides=[10, 1], requires_grad=1, device=cpu) = onnx::Gemm[alpha=1., beta=1., transB=1, onnx_name="/fc/Gemm"](%/Reshape_output_0, %fc.weight, %fc.bias), scope: __main__.RNN::/torch.nn.modules.linear.Linear::fc # /home/ubuntu/anaconda3/envs/py37/lib/python3.7/site-packages/torch/nn/modules/linear.py:114:0
  return (%output, %hidden_prev)
load model done.
graph torch_jit (
  %input[FLOAT, input_dynamic_axes_1x10x10]
  %hidden_prev.1[FLOAT, 2xhidden_prev.1_dim_1x32]
) initializers (
  %fc.weight[FLOAT, 10x320]
  %fc.bias[FLOAT, 10]
  %onnx::RNN_58[FLOAT, 1x32x10]
  %onnx::RNN_59[FLOAT, 1x32x32]
  %onnx::RNN_60[FLOAT, 1x64]
  %onnx::RNN_62[FLOAT, 1x32x32]
  %onnx::RNN_63[FLOAT, 1x32x32]
  %onnx::RNN_64[FLOAT, 1x64]
) {
  %/rnn/Transpose_output_0 = Transpose[perm = [1, 0, 2]](%input)
  %/rnn/Constant_output_0 = Constant[value = <Tensor>]()
  %/rnn/Constant_1_output_0 = Constant[value = <Tensor>]()
  %/rnn/Constant_2_output_0 = Constant[value = <Tensor>]()
  %/rnn/Slice_output_0 = Slice(%hidden_prev.1, %/rnn/Constant_1_output_0, %/rnn/Constant_2_output_0, %/rnn/Constant_output_0)
  %/rnn/RNN_output_0, %/rnn/RNN_output_1 = RNN[activations = ['Tanh'], hidden_size = 32](%/rnn/Transpose_output_0, %onnx::RNN_58, %onnx::RNN_59, %onnx::RNN_60, %, %/rnn/Slice_output_0)
  %/rnn/Constant_3_output_0 = Constant[value = <Tensor>]()
  %/rnn/Squeeze_output_0 = Squeeze(%/rnn/RNN_output_0, %/rnn/Constant_3_output_0)
  %/rnn/Constant_4_output_0 = Constant[value = <Tensor>]()
  %/rnn/Constant_5_output_0 = Constant[value = <Tensor>]()
  %/rnn/Constant_6_output_0 = Constant[value = <Tensor>]()
  %/rnn/Slice_1_output_0 = Slice(%hidden_prev.1, %/rnn/Constant_5_output_0, %/rnn/Constant_6_output_0, %/rnn/Constant_4_output_0)
  %/rnn/RNN_1_output_0, %/rnn/RNN_1_output_1 = RNN[activations = ['Tanh'], hidden_size = 32](%/rnn/Squeeze_output_0, %onnx::RNN_62, %onnx::RNN_63, %onnx::RNN_64, %, %/rnn/Slice_1_output_0)
  %/rnn/Constant_7_output_0 = Constant[value = <Tensor>]()
  %/rnn/Squeeze_1_output_0 = Squeeze(%/rnn/RNN_1_output_0, %/rnn/Constant_7_output_0)
  %/rnn/Transpose_1_output_0 = Transpose[perm = [1, 0, 2]](%/rnn/Squeeze_1_output_0)
  %hidden_prev = Concat[axis = 0](%/rnn/RNN_output_1, %/rnn/RNN_1_output_1)
  %/Shape_output_0 = Shape(%/rnn/Transpose_1_output_0)
  %/Constant_output_0 = Constant[value = <Scalar Tensor []>]()
  %/Gather_output_0 = Gather[axis = 0](%/Shape_output_0, %/Constant_output_0)
  %onnx::Unsqueeze_50 = Constant[value = <Tensor>]()
  %/Unsqueeze_output_0 = Unsqueeze(%/Gather_output_0, %onnx::Unsqueeze_50)
  %/Constant_1_output_0 = Constant[value = <Tensor>]()
  %/Concat_output_0 = Concat[axis = 0](%/Unsqueeze_output_0, %/Constant_1_output_0)
  %/Reshape_output_0 = Reshape[allowzero = 0](%/rnn/Transpose_1_output_0, %/Concat_output_0)
  %output = Gemm[alpha = 1, beta = 1, transB = 1](%/Reshape_output_0, %fc.weight, %fc.bias)
  return %output, %hidden_prev
}
check model done.
```
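The exported graph is easier to read next to a plain-NumPy sketch of what it computes: the two `onnx::RNN` nodes are the two layers of the tanh RNN, the `Slice` nodes pick each layer's slice of `hidden_prev`, `Concat` stacks the final hidden states back together, and `Reshape` + `Gemm` are the flatten and `fc` layer. This is only an illustration — the weight names below are made up and the random values do not correspond to the ONNX initializers (`onnx::RNN_58` etc.):

```python
import numpy as np

rng = np.random.default_rng(0)
seq_len, batch, input_size, hidden_size, num_layers, output_size = 10, 1, 10, 32, 2, 10

# Per-layer weights in nn.RNN's weight_ih / weight_hh layout (names are illustrative)
W_ih = [rng.standard_normal((hidden_size, input_size if l == 0 else hidden_size)) * 0.1
        for l in range(num_layers)]
W_hh = [rng.standard_normal((hidden_size, hidden_size)) * 0.1 for _ in range(num_layers)]
b = [np.zeros(hidden_size) for _ in range(num_layers)]

x = rng.standard_normal((batch, seq_len, input_size))
h0 = np.zeros((num_layers, batch, hidden_size))

layer_in = x
h_last = []
for l in range(num_layers):          # the two onnx::RNN nodes
    h = h0[l]                        # the Slice over hidden_prev
    outs = []
    for t in range(seq_len):         # the recurrence inside one RNN node
        h = np.tanh(layer_in[:, t] @ W_ih[l].T + h @ W_hh[l].T + b[l])
        outs.append(h)
    layer_in = np.stack(outs, axis=1)
    h_last.append(h)

hidden_prev = np.stack(h_last)       # the Concat -> (num_layers, batch, hidden_size)
flat = layer_in.reshape(batch, -1)   # the Reshape -> (batch, seq_len * hidden_size)
W_fc = rng.standard_normal((output_size, seq_len * hidden_size)) * 0.01
y = flat @ W_fc.T                    # the Gemm (fc layer, bias omitted here)

print(y.shape, hidden_prev.shape)    # (1, 10) (2, 1, 32)
```

The shapes match the graph above: the final linear layer consumes the full flattened sequence (`10 * 32 = 320` features), which is why `fc.weight` is `10x320`.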

Calling the ONNX Model from C++

Code

```cpp
#include <cassert>
#include <cstdio>
#include <iostream>
#include <vector>
#include <onnxruntime_cxx_api.h>

using std::vector;

vector<float> testOnnxRNN() {
    // Set the level to VERBOSE to see in the console whether CPU or GPU is used
    // Ort::Env env(ORT_LOGGING_LEVEL_VERBOSE, "test");
    Ort::Env env(ORT_LOGGING_LEVEL_WARNING, "Default");
    Ort::SessionOptions session_options;
    session_options.SetIntraOpNumThreads(5);  // run ops with five threads to speed things up
    // The second argument is GPU device_id = 0; leave this line commented out to run on CPU
    // OrtSessionOptionsAppendExecutionProvider_CUDA(session_options, 0);
    session_options.SetGraphOptimizationLevel(GraphOptimizationLevel::ORT_ENABLE_ALL);

#ifdef _WIN32
    const wchar_t* model_path = L"C:\\Users\\xxx\\Desktop\\RNN.onnx";
    wprintf(L"%s\n", model_path);
#else
    const char* model_path = "RNN.onnx";  // non-Windows builds take a narrow string path
    printf("%s\n", model_path);
#endif

    Ort::Session session(env, model_path, session_options);
    Ort::AllocatorWithDefaultOptions allocator;

    size_t num_input_nodes = session.GetInputCount();
    size_t num_output_nodes = session.GetOutputCount();
    // Names must match the input_names/output_names used at export time
    std::vector<const char*> input_node_names = { "input", "hidden_prev_in" };
    std::vector<const char*> output_node_names = { "output", "hidden_prev_out" };

    const int input_size = 10;
    const int output_size = 10;
    const int batch_size = 1;
    const int seq_len = 10;
    const int num_layers = 2;
    const int hidden_size = 32;

    // input: (batch_size, seq_len, input_size)
    std::vector<int64_t> input_node_dims = { batch_size, seq_len, input_size };
    size_t input_tensor_size = batch_size * seq_len * input_size;
    std::vector<float> input_tensor_values(input_tensor_size);
    for (unsigned int i = 0; i < input_tensor_size; i++) {
        input_tensor_values[i] = (float)i / (input_tensor_size + 1);
    }
    auto memory_info = Ort::MemoryInfo::CreateCpu(OrtArenaAllocator, OrtMemTypeDefault);
    Ort::Value input_tensor = Ort::Value::CreateTensor<float>(
        memory_info, input_tensor_values.data(), input_tensor_size,
        input_node_dims.data(), 3);
    assert(input_tensor.IsTensor());

    // hidden_prev_in: (num_layers, batch_size, hidden_size)
    std::vector<int64_t> hidden_prev_in_node_dims = { num_layers, batch_size, hidden_size };
    size_t hidden_prev_in_tensor_size = num_layers * batch_size * hidden_size;
    std::vector<float> hidden_prev_in_tensor_values(hidden_prev_in_tensor_size);
    for (unsigned int i = 0; i < hidden_prev_in_tensor_size; i++) {
        hidden_prev_in_tensor_values[i] = (float)i / (hidden_prev_in_tensor_size + 1);
    }
    auto mask_memory_info = Ort::MemoryInfo::CreateCpu(OrtArenaAllocator, OrtMemTypeDefault);
    Ort::Value hidden_prev_in_tensor = Ort::Value::CreateTensor<float>(
        mask_memory_info, hidden_prev_in_tensor_values.data(), hidden_prev_in_tensor_size,
        hidden_prev_in_node_dims.data(), 3);
    assert(hidden_prev_in_tensor.IsTensor());

    std::vector<Ort::Value> ort_inputs;
    ort_inputs.push_back(std::move(input_tensor));
    ort_inputs.push_back(std::move(hidden_prev_in_tensor));

    vector<float> ret;
    try {
        auto output_tensors = session.Run(Ort::RunOptions{ nullptr },
                                          input_node_names.data(), ort_inputs.data(), ort_inputs.size(),
                                          output_node_names.data(), 2);
        float* output = output_tensors[0].GetTensorMutableData<float>();
        float* hidden_prev_out = output_tensors[1].GetTensorMutableData<float>();
        // output
        for (int i = 0; i < output_size; i++) {
            ret.emplace_back(output[i]);
            std::cout << output[i] << " ";
        }
        std::cout << "\n";
        // hidden_prev_out
        //for (int i = 0; i < num_layers * batch_size * hidden_size; i++) {
        //    std::cout << hidden_prev_out[i] << "\t";
        //}
        //std::cout << "\n";
    } catch (const std::exception& e) {
        std::cout << e.what() << std::endl;
    }
    return ret;
}
```

Output

```
C:\Users\xxx\Desktop\RNN.onnx
0.00296116 0.104443 -0.104239 0.249864 -0.155839 0.019295 0.0458037 -0.0596341 -0.129019 -0.014682
```
