The difference between torch:: and at:: factory functions

  • Preface
  • torch::autograd::THPVariable_rand
  • torch::rand_symint
  • at::rand_symint
  • demo
    • torch namespace
    • at namespace

Preface

>>> import torch
>>> a = torch.rand(3, 4)
>>> a.requires_grad
False
>>> a = torch.rand(3, 4, requires_grad = True)
>>> a.requires_grad
True

In both examples, the torch.rand factory function produces a differentiable or a non-differentiable tensor according to the requires_grad argument. Digging into the C++ layer underneath, it turns out that the two calls end up in factory functions from two different namespaces, torch:: and at::. This article walks through the source code and a demo program to understand how the tensors produced by the different factory functions differ.

torch::autograd::THPVariable_rand

If you inspect the program's backtrace with gdb, you will find that torch::autograd::THPVariable_rand is the first rand-related function reached after crossing from the Python world into the C++ world.

torch/csrc/autograd/generated/python_torch_functions_0.cpp

static PyObject * THPVariable_rand(PyObject* self_, PyObject* args, PyObject* kwargs)
{
  HANDLE_TH_ERRORS
  static PythonArgParser parser({
    "rand(SymIntArrayRef size, *, Generator? generator, DimnameList? names, ScalarType? dtype=None, Layout? layout=None, Device? device=None, bool? pin_memory=False, bool? requires_grad=False)",
    "rand(SymIntArrayRef size, *, Generator? generator, Tensor out=None, ScalarType? dtype=None, Layout? layout=None, Device? device=None, bool? pin_memory=False, bool? requires_grad=False)",
    "rand(SymIntArrayRef size, *, Tensor out=None, ScalarType? dtype=None, Layout? layout=None, Device? device=None, bool? pin_memory=False, bool? requires_grad=False)",
    "rand(SymIntArrayRef size, *, DimnameList? names, ScalarType? dtype=None, Layout? layout=None, Device? device=None, bool? pin_memory=False, bool? requires_grad=False)",
  }, /*traceable=*/true);

  ParsedArgs<8> parsed_args;
  auto _r = parser.parse(nullptr, args, kwargs, parsed_args);
  if(_r.has_torch_function()) {
    return handle_torch_function(_r, nullptr, args, kwargs, THPVariableFunctionsModule, "torch");
  }
  switch (_r.idx) {
    // ...
    case 2: {
      if (_r.isNone(1)) {
        // aten::rand(SymInt[] size, *, ScalarType? dtype=None, Layout? layout=None, Device? device=None, bool? pin_memory=None) -> Tensor
        const auto options = TensorOptions()
            .dtype(_r.scalartypeOptional(2))
            .device(_r.deviceWithDefault(4, torch::tensors::get_default_device()))
            .layout(_r.layoutOptional(3))
            .requires_grad(_r.toBool(6))
            .pinned_memory(_r.toBool(5));
        torch::utils::maybe_initialize_cuda(options);
        auto dispatch_rand = [](c10::SymIntArrayRef size, at::TensorOptions options) -> at::Tensor {
          pybind11::gil_scoped_release no_gil;
          return torch::rand_symint(size, options);
        };
        return wrap(dispatch_rand(_r.symintlist(0), options));
      } else {
        // aten::rand.out(SymInt[] size, *, Tensor(a!) out) -> Tensor(a!)
        check_out_type_matches(_r.tensor(1), _r.scalartypeOptional(2),
                               _r.isNone(2), _r.layoutOptional(3),
                               _r.deviceWithDefault(4, torch::tensors::get_default_device()), _r.isNone(4));
        auto dispatch_rand_out = [](at::Tensor out, c10::SymIntArrayRef size) -> at::Tensor {
          pybind11::gil_scoped_release no_gil;
          return at::rand_symint_out(out, size);
        };
        return wrap(dispatch_rand_out(_r.tensor(1), _r.symintlist(0)).set_requires_grad(_r.toBool(6)));
      }
    }
    // ...
  }
  Py_RETURN_NONE;
  END_HANDLE_TH_ERRORS
}

We invoked it as torch.rand(3, 4), i.e. only the size argument was supplied. Compare this against the four API signatures below:

    "rand(SymIntArrayRef size, *, Generator? generator, DimnameList? names, ScalarType? dtype=None, Layout? layout=None, Device? device=None, bool? pin_memory=False, bool? requires_grad=False)","rand(SymIntArrayRef size, *, Generator? generator, Tensor out=None, ScalarType? dtype=None, Layout? layout=None, Device? device=None, bool? pin_memory=False, bool? requires_grad=False)","rand(SymIntArrayRef size, *, Tensor out=None, ScalarType? dtype=None, Layout? layout=None, Device? device=None, bool? pin_memory=False, bool? requires_grad=False)","rand(SymIntArrayRef size, *, DimnameList? names, ScalarType? dtype=None, Layout? layout=None, Device? device=None, bool? pin_memory=False, bool? requires_grad=False)",

Of these, all but the signature at index 2 (0-based) require extra arguments such as generator or names to be supplied, so the parser matches index 2 and execution enters case 2 of the switch.

Next, the argument at index 1 (0-based), i.e. the out argument, is checked for being None via _r.isNone(1):

  • If the out argument is not supplied, the if branch is taken: torch::rand_symint is called and a differentiable at::Tensor is returned.

  • If the out argument is supplied, the else branch is taken: at::rand_symint_out is called and the returned at::Tensor is not differentiable by itself (differentiability is set afterwards via set_requires_grad).

Since no out argument was supplied here, the if branch is taken.

Also note the function's requires_grad argument (index 6). In the if branch it is parsed as follows, and the information is recorded in a TensorOptions object:

        const auto options = TensorOptions()
            .dtype(_r.scalartypeOptional(2))
            .device(_r.deviceWithDefault(4, torch::tensors::get_default_device()))
            .layout(_r.layoutOptional(3))
            .requires_grad(_r.toBool(6))
            .pinned_memory(_r.toBool(5));

The TensorOptions object is then passed as an argument into torch::rand_symint:

          return torch::rand_symint(size, options);

In the else branch, dispatch_rand_out is called first to obtain an at::Tensor:

        auto dispatch_rand_out = [](at::Tensor out, c10::SymIntArrayRef size) -> at::Tensor {
          pybind11::gil_scoped_release no_gil;
          return at::rand_symint_out(out, size);
        };

Then set_requires_grad is used to make the result differentiable or not:

        return wrap(dispatch_rand_out(_r.tensor(1), _r.symintlist(0)).set_requires_grad(_r.toBool(6)));

Next, let's step into the source of torch::rand_symint to see how it differs from at::rand_symint.

torch::rand_symint

torch/csrc/autograd/generated/variable_factories.h

inline at::Tensor rand_symint(c10::SymIntArrayRef size, at::TensorOptions options = {}) {
  at::AutoDispatchBelowADInplaceOrView guard;
  return autograd::make_variable(
      at::rand_symint(size, at::TensorOptions(options).requires_grad(c10::nullopt)),
      /*requires_grad=*/options.requires_grad());
}

As we can see, at::rand_symint is called first to obtain an at::Tensor, and then autograd::make_variable wraps another layer around the returned tensor.

at::Tensor inherits from at::TensorBase, and at::TensorBase holds a c10::TensorImpl, which has a member variable autograd_meta_. Depending on its second argument, requires_grad, autograd::make_variable calls c10::TensorImpl::set_autograd_meta to set autograd_meta_ either to null or to a non-trivial value. If autograd_meta_ is non-null, the returned Variable is endowed with autograd capability.
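
For intuition, here is a condensed sketch of make_variable (simplified from torch/csrc/autograd/variable.h; the shallow-copy path taken when the tensor's impl is shared, and various checks, are omitted, so treat this as a sketch rather than the verbatim source):

namespace torch { namespace autograd {

// simplified sketch, not the verbatim PyTorch source
inline Variable make_variable(at::Tensor data, bool requires_grad = false) {
  auto data_impl = data.unsafeReleaseIntrusivePtr();
  if (requires_grad) {
    // attach non-trivial autograd metadata: the returned Variable supports autograd
    data_impl->set_autograd_meta(
        std::make_unique<AutogradMeta>(data_impl.get(), requires_grad));
  } else {
    // leave autograd_meta_ empty: the returned Variable does not support autograd
    data_impl->set_autograd_meta(nullptr);
  }
  return Variable(std::move(data_impl));
}

}} // namespace torch::autograd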

at::rand_symint

build/aten/src/ATen/Functions.h

// aten::rand(SymInt[] size, *, ScalarType? dtype=None, Layout? layout=None, Device? device=None, bool? pin_memory=None) -> Tensor
inline at::Tensor rand_symint(c10::SymIntArrayRef size, at::TensorOptions options={}) {
  return at::_ops::rand::call(size, optTypeMetaToScalarType(options.dtype_opt()), options.layout_opt(), options.device_opt(), options.pinned_memory_opt());
}
namespace symint {
  template <typename T, typename = std::enable_if_t<std::is_same<T, c10::SymInt>::value>>
  at::Tensor rand(c10::SymIntArrayRef size, at::TensorOptions options={}) {
    return at::_ops::rand::call(size, optTypeMetaToScalarType(options.dtype_opt()), options.layout_opt(), options.device_opt(), options.pinned_memory_opt());
  }
}

The at::rand_symint function simply forwards to at::_ops::rand::call and returns immediately; options.requires_grad() never comes into play.
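
As a quick illustration of the consequence (a standalone sketch written for this article, not part of the PyTorch sources): requires_grad set through TensorOptions is silently dropped on the at:: path, while the torch:: wrapper honors it.

#include <torch/torch.h>
#include <iostream>

int main() {
    // requires_grad set via TensorOptions never reaches at::_ops::rand::call,
    // which only receives the dtype/layout/device/pin_memory options
    at::Tensor a = at::rand({2, 2}, at::TensorOptions().requires_grad(true));
    std::cout << a.requires_grad() << std::endl; // 0

    // the torch:: wrapper reads options.requires_grad() and applies it via make_variable
    torch::Tensor b = torch::rand({2, 2}, torch::requires_grad());
    std::cout << b.requires_grad() << std::endl; // 1
    return 0;
}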

The PYTORCH C++ API - Autograd documentation serves as confirmation:

The at::Tensor class in ATen is not differentiable by default. To add the differentiability of tensors the autograd API provides, you must use tensor factory functions from the torch:: namespace instead of the at:: namespace. For example, while a tensor created with at::ones will not be differentiable, a tensor created with torch::ones will be.

Factory functions under at:: produce tensors with no autograd capability; if you want a tensor with autograd capability, use the factory functions under torch:: instead (and pass torch::requires_grad()).

demo

After installing LibTorch, create a file autograd.cpp, following AUTOGRAD IN C++ FRONTEND:

#include <torch/torch.h>int main(){torch::Tensor x = torch::ones({2, 2});std::cout << x << std::endl;std::cout << x.requires_grad() << std::endl; // 0x = torch::ones({2, 2}, torch::requires_grad());// 建構時傳入torch::requires_grad(),張量的requires_grad()便會為truestd::cout << x.requires_grad() << std::endl; // 1torch::Tensor y = x.mean();std::cout << y << std::endl;std::cout << y.requires_grad() << std::endl; // 1// 對於非葉子節點,必須事先調用retain_grad(),這樣它在反向傳播時的梯度才會被保留y.retain_grad(); // retain grad for non-leaf Tensory.backward();std::cout << y.grad() << std::endl;std::cout << x.grad() << std::endl;// at命名空間at::Tensor x1 = at::ones({2, 2});std::cout << x1.requires_grad() << std::endl; // 0at::Tensor y1 = x1.mean();std::cout << y1.requires_grad() << std::endl; // 0// y1.retain_grad(); // core dumped// at::Tensor透過set_requires_grad後就可以被微分了x1.set_requires_grad(true);std::cout << "after set requires grad: " << x1.requires_grad() << std::endl; // 1std::cout << y1.requires_grad() << std::endl; // 0// x1改變了之後y1也必須更新y1 = x1.mean();std::cout << y1.requires_grad() << std::endl; // 1y1.retain_grad(); // retain grad for non-leaf Tensory1.backward();std::cout << y1.grad() << std::endl;std::cout << x1.grad() << std::endl;return 0;
}

Write the following CMakeLists.txt:

cmake_minimum_required(VERSION 3.18 FATAL_ERROR)
project(autograd)

find_package(Torch REQUIRED)
set(CMAKE_CXX_FLAGS "${CMAKE_CXX_FLAGS} ${TORCH_CXX_FLAGS}")

add_executable(autograd autograd.cpp)
target_link_libraries(autograd "${TORCH_LIBRARIES}")
set_property(TARGET autograd PROPERTY CXX_STANDARD 17)

# The following code block is suggested to be used on Windows.
# According to https://github.com/pytorch/pytorch/issues/25457,
# the DLLs need to be copied to avoid memory errors.
if (MSVC)
  file(GLOB TORCH_DLLS "${TORCH_INSTALL_PREFIX}/lib/*.dll")
  add_custom_command(TARGET autograd
                     POST_BUILD
                     COMMAND ${CMAKE_COMMAND} -E copy_if_different
                     ${TORCH_DLLS}
                     $<TARGET_FILE_DIR:autograd>)
endif (MSVC)

Compile and run:

rm -rf * && cmake -DCMAKE_PREFIX_PATH=/root/Documents/installation/libtorch .. && make && ./autograd

A line-by-line analysis follows.

torch namespace

Create a torch::Tensor using a factory function from the torch namespace:

    torch::Tensor x = torch::ones({2, 2});
    std::cout << x << std::endl;

The output is as follows:

     1  1
     1  1
    [ CPUFloatType{2,2} ]

Since torch::requires_grad() was not passed here, the tensor's requires_grad() is false:

    std::cout << x.requires_grad() << std::endl; // 0

If torch::requires_grad() is passed at construction time, the tensor's requires_grad() becomes true:

    x = torch::ones({2, 2}, torch::requires_grad());
    std::cout << x.requires_grad() << std::endl; // 1
    torch::Tensor y = x.mean();
    std::cout << y << std::endl;

Printing y gives:

    1
    [ CPUFloatType{} ]

Its requires_grad() is also true:

    std::cout << y.requires_grad() << std::endl; // 1

For a non-leaf node, retain_grad() must be called in advance so that its gradient is retained during backpropagation:

    y.retain_grad(); // retain grad for non-leaf Tensor
    y.backward();
    std::cout << y.grad() << std::endl;

The output is:

    1
    [ CPUFloatType{} ]

If the preceding y.retain_grad() is omitted and y.grad() is called directly, the run ends in a core dump:

    [W TensorBody.h:489] Warning: The .grad attribute of a Tensor that is not a leaf Tensor is being accessed. Its .grad attribute won't be populated during autograd.backward(). If you indeed want the .grad field to be populated for a non-leaf Tensor, use .retain_grad() on the non-leaf Tensor. If you access the non-leaf Tensor by mistake, make sure you access the leaf Tensor instead. See github.com/pytorch/pytorch/pull/30531 for more informations. (function grad)
    [ Tensor (undefined) ]
    terminate called after throwing an instance of 'c10::Error'
      what():  Trying to backward through the graph a second time (or directly access saved tensors after they have already been freed). Saved intermediate values of the graph are freed when you call .backward() or autograd.grad(). Specify retain_graph=True if you need to backward through the graph a second time or if you need to access saved tensors after calling backward.
    Exception raised from unpack at ../torch/csrc/autograd/saved_variable.cpp:136 (most recent call first):
    frame #0: c10::Error::Error(c10::SourceLocation, std::string) + 0x57 (0x7faba17f4d47 in /root/Documents/installation/libtorch/lib/libc10.so)
    frame #1: c10::detail::torchCheckFail(char const*, char const*, unsigned int, char const*) + 0x68 (0x7faba17ae0fc in /root/Documents/installation/libtorch/lib/libc10.so)
    frame #2: torch::autograd::SavedVariable::unpack(std::shared_ptr<torch::autograd::Node>) const + 0x13b2 (0x7fab8f87d6c2 in /root/Documents/installation/libtorch/lib/libtorch_cpu.so)
    frame #3: torch::autograd::generated::MeanBackward0::apply(std::vector<at::Tensor, std::allocator<at::Tensor> >&&) + 0x98 (0x7fab8eb73998 in /root/Documents/installation/libtorch/lib/libtorch_cpu.so)
    frame #4: <unknown function> + 0x4d068cb (0x7fab8f8428cb in /root/Documents/installation/libtorch/lib/libtorch_cpu.so)
    frame #5: torch::autograd::Engine::evaluate_function(std::shared_ptr<torch::autograd::GraphTask>&, torch::autograd::Node*, torch::autograd::InputBuffer&, std::shared_ptr<torch::autograd::ReadyQueue> const&) + 0xe8d (0x7fab8f83b94d in /root/Documents/installation/libtorch/lib/libtorch_cpu.so)
    frame #6: torch::autograd::Engine::thread_main(std::shared_ptr<torch::autograd::GraphTask> const&) + 0x698 (0x7fab8f83cca8 in /root/Documents/installation/libtorch/lib/libtorch_cpu.so)
    frame #7: torch::autograd::Engine::execute_with_graph_task(std::shared_ptr<torch::autograd::GraphTask> const&, std::shared_ptr<torch::autograd::Node>, torch::autograd::InputBuffer&&) + 0x3dd (0x7fab8f8378bd in /root/Documents/installation/libtorch/lib/libtorch_cpu.so)
    frame #8: torch::autograd::Engine::execute(std::vector<torch::autograd::Edge, std::allocator<torch::autograd::Edge> > const&, std::vector<at::Tensor, std::allocator<at::Tensor> > const&, bool, bool, bool, std::vector<torch::autograd::Edge, std::allocator<torch::autograd::Edge> > const&) + 0xa26 (0x7fab8f83a546 in /root/Documents/installation/libtorch/lib/libtorch_cpu.so)
    frame #9: <unknown function> + 0x4ce0e81 (0x7fab8f81ce81 in /root/Documents/installation/libtorch/lib/libtorch_cpu.so)
    frame #10: torch::autograd::backward(std::vector<at::Tensor, std::allocator<at::Tensor> > const&, std::vector<at::Tensor, std::allocator<at::Tensor> > const&, c10::optional<bool>, bool, std::vector<at::Tensor, std::allocator<at::Tensor> > const&) + 0x5c (0x7fab8f81f88c in /root/Documents/installation/libtorch/lib/libtorch_cpu.so)
    frame #11: <unknown function> + 0x4d447de (0x7fab8f8807de in /root/Documents/installation/libtorch/lib/libtorch_cpu.so)
    frame #12: at::Tensor::_backward(c10::ArrayRef<at::Tensor>, c10::optional<at::Tensor> const&, c10::optional<bool>, bool) const + 0x48 (0x7fab8c51b208 in /root/Documents/installation/libtorch/lib/libtorch_cpu.so)
    frame #13: <unknown function> + 0x798a (0x5638af5ed98a in ./autograd)
    frame #14: <unknown function> + 0x4d55 (0x5638af5ead55 in ./autograd)
    frame #15: <unknown function> + 0x29d90 (0x7fab8a6e9d90 in /lib/x86_64-linux-gnu/libc.so.6)
    frame #16: __libc_start_main + 0x80 (0x7fab8a6e9e40 in /lib/x86_64-linux-gnu/libc.so.6)
    frame #17: <unknown function> + 0x4985 (0x5638af5ea985 in ./autograd)

Moving on to the gradient of x:

    std::cout << x.grad() << std::endl;

     0.2500  0.2500
     0.2500  0.2500
    [ CPUFloatType{2,2} ]

at namespace

Now switch to factory functions under the at namespace to create the tensors:

    // at namespace
    at::Tensor x1 = at::ones({2, 2});
    std::cout << x1.requires_grad() << std::endl; // 0

What happens if we add the torch::requires_grad() argument to at::ones, the same way we did with torch::ones? The result is that x1.requires_grad() is still 0. Looking back at at::rand_symint, we can guess why: the call into the underlying function only passes along the four options dtype_opt, layout_opt, device_opt, and pinned_memory_opt, ignoring options.requires_grad:

    at::_ops::rand::call(size, optTypeMetaToScalarType(options.dtype_opt()), options.layout_opt(), options.device_opt(), options.pinned_memory_opt());
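
The experiment described above would look like this in the demo (x2 is a hypothetical variable, not part of the original autograd.cpp):

    // hypothetical variation: pass torch::requires_grad() to the at:: factory
    at::Tensor x2 = at::ones({2, 2}, torch::requires_grad());
    std::cout << x2.requires_grad() << std::endl; // still 0: the option is dropped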

Define the variable y1; its requires_grad is false at first:

    at::Tensor y1 = x1.mean();
    std::cout << y1.requires_grad() << std::endl; // 0

Because both x1 and y1 are non-differentiable at this point, attempting to call y1.retain_grad() leads to a core dump:

    terminate called after throwing an instance of 'c10::Error'
      what():  can't retain_grad on Tensor that has requires_grad=False
    Exception raised from retain_grad at ../torch/csrc/autograd/variable.cpp:503 (most recent call first):
    frame #0: c10::Error::Error(c10::SourceLocation, std::string) + 0x57 (0x7f7401f62d47 in /root/Documents/installation/libtorch/lib/libc10.so)
    frame #1: c10::detail::torchCheckFail(char const*, char const*, unsigned int, char const*) + 0x68 (0x7f7401f1c0fc in /root/Documents/installation/libtorch/lib/libc10.so)
    frame #2: <unknown function> + 0x4d4751f (0x7f73efff151f in /root/Documents/installation/libtorch/lib/libtorch_cpu.so)
    frame #3: <unknown function> + 0x4cef (0x560b61ca7cef in ./autograd)
    frame #4: <unknown function> + 0x29d90 (0x7f73eae57d90 in /lib/x86_64-linux-gnu/libc.so.6)
    frame #5: __libc_start_main + 0x80 (0x7f73eae57e40 in /lib/x86_64-linux-gnu/libc.so.6)
    frame #6: <unknown function> + 0x4965 (0x560b61ca7965 in ./autograd)
    Aborted (core dumped)

What if we want to make them differentiable? We can use the set_requires_grad function:

    x1.set_requires_grad(true);
    std::cout << "after set requires grad: " << x1.requires_grad() << std::endl; // 1
    std::cout << y1.requires_grad() << std::endl; // 0

Note that y1's requires_grad is still false at this point, because y1 has not been updated since x1 changed: y1 was computed while x1 was still non-differentiable, so it is not attached to any autograd graph.

After updating y1 as follows, its requires_grad also becomes true:

    y1 = x1.mean();
    std::cout << y1.requires_grad() << std::endl; // 1

The role of y1.retain_grad(); is to retain the gradient of the non-leaf tensor:

    y1.retain_grad(); // retain grad for non-leaf Tensor

The precondition for calling this function is that the tensor's requires_grad is true. If the line y1 = x1.mean(); is omitted, y1's requires_grad is false, so y1.retain_grad(); fails with the following error:

    terminate called after throwing an instance of 'c10::Error'
      what():  can't retain_grad on Tensor that has requires_grad=False
    Exception raised from retain_grad at ../torch/csrc/autograd/variable.cpp:503 (most recent call first):
    frame #0: c10::Error::Error(c10::SourceLocation, std::string) + 0x57 (0x7fafd2dfcd47 in /root/Documents/installation/libtorch/lib/libc10.so)
    frame #1: c10::detail::torchCheckFail(char const*, char const*, unsigned int, char const*) + 0x68 (0x7fafd2db60fc in /root/Documents/installation/libtorch/lib/libc10.so)
    frame #2: <unknown function> + 0x4d4751f (0x7fafc0e8b51f in /root/Documents/installation/libtorch/lib/libtorch_cpu.so)
    frame #3: <unknown function> + 0x4f77 (0x55f9a73dff77 in ./autograd)
    frame #4: <unknown function> + 0x29d90 (0x7fafbbcf1d90 in /lib/x86_64-linux-gnu/libc.so.6)
    frame #5: __libc_start_main + 0x80 (0x7fafbbcf1e40 in /lib/x86_64-linux-gnu/libc.so.6)
    frame #6: <unknown function> + 0x4985 (0x55f9a73df985 in ./autograd)
    Aborted (core dumped)

Start backpropagation, then inspect y1's gradient:

    y1.backward();
    std::cout << y1.grad() << std::endl;

    1
    [ CPUFloatType{} ]

If y1.retain_grad(); is commented out, y1's gradient is not retained; only an undefined tensor is printed, along with the following warning:

    [W TensorBody.h:489] Warning: The .grad attribute of a Tensor that is not a leaf Tensor is being accessed. Its .grad attribute won't be populated during autograd.backward(). If you indeed want the .grad field to be populated for a non-leaf Tensor, use .retain_grad() on the non-leaf Tensor. If you access the non-leaf Tensor by mistake, make sure you access the leaf Tensor instead. See github.com/pytorch/pytorch/pull/30531 for more informations. (function grad)
    [ Tensor (undefined) ]

Inspect x1's gradient:

    std::cout << x1.grad() << std::endl;

The result is the same as in the torch:: case:

     0.2500  0.2500
     0.2500  0.2500
    [ CPUFloatType{2,2} ]
