[PyTorch] PyTorch Tensor Operations

Table of Contents

  • Preface
  • 1. Tensor Creation
    • 1. TENSOR
    • 2. SPARSE_COO_TENSOR
    • 3. SPARSE_CSR_TENSOR
    • 4. ASARRAY
    • 5. AS_TENSOR
    • 6. FROM_NUMPY
    • 7. FROMBUFFER
    • 8. ZEROS and ZEROS_LIKE
    • 9. ONES and ONES_LIKE
    • 10. ARANGE
    • 11. LINSPACE
    • 12. LOGSPACE
    • 13. EYE
    • 14. EMPTY and EMPTY_LIKE
    • 15. FULL and FULL_LIKE


Preface

This post introduces the APIs for creating and working with Tensors.

1. Tensor Creation

1. TENSOR

torch.tensor(data, *, dtype=None, device=None, requires_grad=False, pin_memory=False) → Tensor
Parameters:
data (array_like) – Initial data for the tensor. Can be a list, tuple, NumPy ndarray, scalar, and other types.
Keyword Arguments:
dtype (torch.dtype, optional) – the desired data type of returned tensor. Default: if None, infers data type from data.
device (torch.device, optional) – the device of the constructed tensor. If None and data is a tensor then the device of data is used. If None and data is not a tensor then the result tensor is constructed on the current device.
requires_grad (bool, optional) – If autograd should record operations on the returned tensor. Default: False.
pin_memory (bool, optional) – If set, returned tensor would be allocated in the pinned memory. Works only for CPU tensors. Default: False.

Constructs a tensor with no autograd history (also known as a "leaf tensor"; see Autograd mechanics) by copying data.
WARNING:
When working with tensors, prefer torch.Tensor.clone(), torch.Tensor.detach(), and torch.Tensor.requires_grad_() for readability. With t a tensor, torch.tensor(t) is equivalent to t.clone().detach(), and torch.tensor(t, requires_grad=True) is equivalent to t.clone().detach().requires_grad_(True).

Compared with torch.as_tensor() and torch.from_numpy():
torch.as_tensor() preserves autograd history and avoids copies where possible. torch.from_numpy() creates a tensor that shares storage with the NumPy array.

torch.tensor([[0.1, 1.2], [2.2, 3.1], [4.9, 5.2]])
torch.tensor([0, 1])  # Type inference on data
torch.tensor([[0.11111, 0.222222, 0.3333333]],
             dtype=torch.float64,
             device=torch.device('cuda:0'))  # creates a double tensor on a CUDA device
torch.tensor(3.14159)  # Create a zero-dimensional (scalar) tensor
torch.tensor([])  # Create an empty tensor (of size (0,))

2. SPARSE_COO_TENSOR

torch.sparse_coo_tensor(indices, values, size=None, *, dtype=None, device=None, requires_grad=False, check_invariants=None, is_coalesced=None) → Tensor
Parameters:
indices (array_like) – Initial data for the tensor. Can be a list, tuple, NumPy ndarray, scalar, and other types. Will be cast to a torch.LongTensor internally. The indices are the coordinates of the non-zero values in the matrix, and thus should be two-dimensional where the first dimension is the number of tensor dimensions and the second dimension is the number of non-zero values.
values (array_like) – Initial values for the tensor. Can be a list, tuple, NumPy ndarray, scalar, and other types.
size (list, tuple, or torch.Size, optional) – Size of the sparse tensor. If not provided the size will be inferred as the minimum size big enough to hold all non-zero elements.
Keyword Arguments:
dtype (torch.dtype, optional) – the desired data type of returned tensor. Default: if None, infers data type from values.
device (torch.device, optional) – the desired device of returned tensor. Default: if None, uses the current device for the default tensor type (see torch.set_default_tensor_type()). device will be the CPU for CPU tensor types and the current CUDA device for CUDA tensor types.
requires_grad (bool, optional) – If autograd should record operations on the returned tensor. Default: False.
check_invariants (bool, optional) – If sparse tensor invariants are checked. Default: as returned by torch.sparse.check_sparse_tensor_invariants.is_enabled(), initially False.
is_coalesced (bool, optional) – When True, the caller is responsible for providing tensor indices that correspond to a coalesced tensor. If the check_invariants flag is False, no error will be raised if the prerequisites are not met, and this will lead to silently incorrect results. To force coalescing, use coalesce() on the resulting Tensor. Default: None: except for trivial cases (e.g. nnz < 2) the resulting Tensor has is_coalesced set to False.

Constructs a sparse tensor in COO (coordinate) format with the given indices and values.
When is_coalesced is unspecified or set to None, this function returns an uncoalesced tensor.
If the device argument is not specified, the devices of the given values and indices tensors must match. If it is specified, however, the input tensors will be converted to that device, which in turn determines the device of the constructed sparse tensor.

i = torch.tensor([[0, 1, 1],[2, 0, 2]])
v = torch.tensor([3, 4, 5], dtype=torch.float32)
torch.sparse_coo_tensor(i, v, [2, 4])
torch.sparse_coo_tensor(i, v)  # Shape inference
torch.sparse_coo_tensor(i, v, [2, 4],
                        dtype=torch.float64,
                        device=torch.device('cuda:0'))
S = torch.sparse_coo_tensor(torch.empty([1, 0]), [], [1])
S = torch.sparse_coo_tensor(torch.empty([1, 0]), torch.empty([0, 2]), [1, 2])

3. SPARSE_CSR_TENSOR

torch.sparse_csr_tensor(crow_indices, col_indices, values, size=None, *, dtype=None, device=None, requires_grad=False, check_invariants=None) → Tensor
Parameters:
crow_indices (array_like) – (B+1)-dimensional array of size (*batchsize, nrows + 1). The last element of each batch is the number of non-zeros. This tensor encodes the index in values and col_indices depending on where the given row starts. Each successive number in the tensor subtracted by the number before it denotes the number of elements in a given row.
col_indices (array_like) – Column co-ordinates of each element in values. (B+1)-dimensional tensor with the same length as values.
values (array_like) – Initial values for the tensor. Can be a list, tuple, NumPy ndarray, scalar, and other types that represent a (1+K)-dimensional tensor where K is the number of dense dimensions.
size (list, tuple, torch.Size, optional) – Size of the sparse tensor: (*batchsize, nrows, ncols, densesize). If not provided, the size will be inferred as the minimum size big enough to hold all non-zero elements.
Keyword Arguments:
dtype (torch.dtype, optional) – the desired data type of returned tensor. Default: if None, infers data type from values.
device (torch.device, optional) – the desired device of returned tensor. Default: if None, uses the current device for the default tensor type (see torch.set_default_tensor_type()). device will be the CPU for CPU tensor types and the current CUDA device for CUDA tensor types.
requires_grad (bool, optional) – If autograd should record operations on the returned tensor. Default: False.
check_invariants (bool, optional) – If sparse tensor invariants are checked. Default: as returned by torch.sparse.check_sparse_tensor_invariants.is_enabled(), initially False.

Constructs a sparse tensor in CSR (Compressed Sparse Row) format from the given values at the specified crow_indices and col_indices. Sparse matrix multiplication in CSR format is typically faster than with COO-format sparse tensors. Be sure to check the note on the index data types.
If the device argument is not specified, the devices of the given values and indices tensors must match. If it is specified, however, the input tensors will be converted to that device, which in turn determines the device of the constructed sparse tensor.

crow_indices = [0, 2, 4]
col_indices = [0, 1, 0, 1]
values = [1, 2, 3, 4]
torch.sparse_csr_tensor(torch.tensor(crow_indices, dtype=torch.int64),
                        torch.tensor(col_indices, dtype=torch.int64),
                        torch.tensor(values), dtype=torch.double)

4. ASARRAY

torch.asarray(obj, *, dtype=None, device=None, copy=None, requires_grad=False) → Tensor
Parameters:
obj (object) – a tensor, NumPy array, DLPack Capsule, object that implements Python’s buffer protocol, scalar, or sequence of scalars.
Keyword Arguments:
dtype (torch.dtype, optional) – the datatype of the returned tensor. Default: None, which causes the datatype of the returned tensor to be inferred from obj.
copy (bool, optional) – controls whether the returned tensor shares memory with obj. Default: None, which causes the returned tensor to share memory with obj whenever possible. If True then the returned tensor does not share its memory. If False then the returned tensor shares its memory with obj and an error is thrown if it cannot.
device (torch.device, optional) – the device of the returned tensor. Default: None, which causes the device of obj to be used. Or, if obj is a Python sequence, the current default device will be used.
requires_grad (bool, optional) – whether the returned tensor requires grad. Default: False, which causes the returned tensor not to require a gradient. If True, then the returned tensor will require a gradient, and if obj is also a tensor with an autograd history then the returned tensor will have the same history.

Converts obj into a tensor.
obj can be one of:

  • a tensor
  • a NumPy array or a NumPy scalar
  • a DLPack capsule
  • an object that implements Python's buffer protocol
  • a scalar
  • a sequence of scalars

When obj is a tensor, NumPy array, or DLPack capsule, the returned tensor by default does not require a gradient, has the same dtype as obj, is on the same device, and shares memory with it. These properties can be controlled with the dtype, device, copy, and requires_grad keyword arguments. If the returned tensor has a different dtype, is on a different device, or a copy is requested, then it will not share memory with obj. If requires_grad is True, the returned tensor will require a gradient, and if obj is also a tensor with an autograd history, the returned tensor will have the same history.
When obj is not a tensor, NumPy array, or DLPack capsule but implements Python's buffer protocol, the buffer is interpreted as an array of bytes grouped according to the size of the dtype passed to the dtype keyword argument. (If no dtype is passed, the default floating-point dtype is used.) The returned tensor will have the specified dtype (or the default floating-point dtype if none is given), lives on the CPU by default, and shares memory with the buffer.
When obj is a NumPy scalar, the returned tensor is a zero-dimensional tensor on the CPU that does not share memory (i.e. copy=True). By default, its dtype is the PyTorch dtype corresponding to the NumPy scalar's dtype.
When obj is none of the above but is a scalar or a sequence of scalars, the returned tensor by default infers its dtype from the scalar values, lives on the current default device, and does not share memory.
Compared with torch.tensor(), torch.from_numpy(), torch.frombuffer(), and torch.from_dlpack():
torch.tensor() creates a tensor that always copies the data from the input object. torch.from_numpy() creates a tensor that always shares memory with the NumPy array. torch.frombuffer() creates a tensor that always shares memory with objects that implement the buffer protocol. torch.from_dlpack() creates a tensor that always shares memory with the DLPack capsule.

import numpy

a = torch.tensor([1, 2, 3])
# Shares memory with tensor 'a'
b = torch.asarray(a)
a.data_ptr() == b.data_ptr()
# Forces memory copy
c = torch.asarray(a, copy=True)
a.data_ptr() == c.data_ptr()

a = torch.tensor([1., 2., 3.], requires_grad=True)
b = a + 2
print(b)
# Shares memory with tensor 'b', with no grad
c = torch.asarray(b)
print(c)
# Shares memory with tensor 'b', retaining autograd history
d = torch.asarray(b, requires_grad=True)
print(d)

array = numpy.array([1, 2, 3])
# Shares memory with array 'array'
t1 = torch.asarray(array)
array.__array_interface__['data'][0] == t1.data_ptr()
# Copies memory due to dtype mismatch
t2 = torch.asarray(array, dtype=torch.float32)
array.__array_interface__['data'][0] == t2.data_ptr()

scalar = numpy.float64(0.5)
print(torch.asarray(scalar))

5. AS_TENSOR

torch.as_tensor(data, dtype=None, device=None) → Tensor
Parameters:
data (array_like) – Initial data for the tensor. Can be a list, tuple, NumPy ndarray, scalar, and other types.
dtype (torch.dtype, optional) – the desired data type of returned tensor. Default: if None, infers data type from data.
device (torch.device, optional) – the device of the constructed tensor. If None and data is a tensor then the device of data is used. If None and data is not a tensor then the result tensor is constructed on the current device.

Converts data into a tensor, sharing data and preserving autograd history if possible.
If data is already a tensor with the requested dtype and device, data itself is returned; but if data is a tensor with a different dtype or device, it is copied as if with data.to(dtype=dtype, device=device).
If data is a NumPy array (an ndarray) with the same dtype and device, the tensor is constructed using torch.from_numpy().
Compared with torch.tensor():
torch.tensor() never shares its data and creates a new "leaf tensor" (see Autograd mechanics).

import numpy

a = numpy.array([1, 2, 3])
t = torch.as_tensor(a)
print(t)
t[0] = -1
print(a)

a = numpy.array([1, 2, 3])
t = torch.as_tensor(a, device=torch.device('cuda'))
print(t)
t[0] = -1
print(a)

6. FROM_NUMPY

torch.from_numpy(ndarray) → Tensor

Creates a Tensor from a numpy.ndarray.
The returned tensor and ndarray share the same memory. Modifications to the tensor are reflected in the ndarray and vice versa. The returned tensor is not resizable.
It currently accepts ndarrays with the dtypes numpy.float64, numpy.float32, numpy.float16, numpy.complex64, numpy.complex128, numpy.int64, numpy.int32, numpy.int16, numpy.int8, numpy.uint8, and numpy.bool.

import numpy

a = numpy.array([1, 2, 3])
t = torch.from_numpy(a)
print(t)
t[0] = -1
print(a)

7. FROMBUFFER

torch.frombuffer(buffer, *, dtype, count=-1, offset=0, requires_grad=False) → Tensor
Parameters:
buffer (object) – a Python object that exposes the buffer interface.
Keyword Arguments:
dtype (torch.dtype) – the desired data type of returned tensor.
count (int, optional) – the number of desired elements to be read. If negative, all the elements (until the end of the buffer) will be read. Default: -1.
offset (int, optional) – the number of bytes to skip at the start of the buffer. Default: 0.
requires_grad (bool, optional) – If autograd should record operations on the returned tensor. Default: False.

Creates a 1-dimensional tensor from an object that implements Python's buffer protocol.

Skips the first offset bytes in the buffer and interprets the remaining raw bytes as a 1-dimensional tensor of type dtype with count elements.
Note that one of the following must be true:
1. count is a positive non-zero number, and the buffer contains at least offset plus count times the size (in bytes) of dtype bytes in total.
2. count is negative, and the length (number of bytes) of the buffer minus offset is a multiple of the size (in bytes) of dtype.
The returned tensor and buffer share the same memory. Modifications to the tensor are reflected in the buffer and vice versa. The returned tensor is not resizable.
This function increments the reference count of the object that owns the shared memory, so the memory will not be freed before the returned tensor goes out of scope.
NOTE:
1. This function's behavior is undefined when it is passed an object implementing the buffer protocol whose data is not on the CPU. Doing so is likely to cause a segmentation fault.
2. This function does not try to infer the dtype (hence, it is not optional). Passing a dtype different from its source may result in unexpected behavior.

import array
a = array.array('i', [1, 2, 3])
t = torch.frombuffer(a, dtype=torch.int32)
print(t)
t[0] = -1
print(a)
# Interprets the signed char bytes as 32-bit integers.
# Each 4 signed char elements will be interpreted as
# 1 signed 32-bit integer.
a = array.array('b', [-1, 0, 0, 0])
torch.frombuffer(a, dtype=torch.int32)

8. ZEROS and ZEROS_LIKE

torch.zeros(*size, *, out=None, dtype=None, layout=torch.strided, device=None, requires_grad=False) → Tensor
Parameters:
size (int…) – a sequence of integers defining the shape of the output tensor. Can be a variable number of arguments or a collection like a list or tuple.
Keyword Arguments:
out (Tensor, optional) – the output tensor.
dtype (torch.dtype, optional) – the desired data type of returned tensor. Default: if None, uses a global default (see torch.set_default_tensor_type()).
layout (torch.layout, optional) – the desired layout of returned Tensor. Default: torch.strided.
device (torch.device, optional) – the desired device of returned tensor. Default: if None, uses the current device for the default tensor type (see torch.set_default_tensor_type()). device will be the CPU for CPU tensor types and the current CUDA device for CUDA tensor types.
requires_grad (bool, optional) – If autograd should record operations on the returned tensor. Default: False.

Returns a tensor filled with the scalar value 0, with the shape defined by the variable argument size.
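For illustration, a minimal zeros sketch (shapes chosen by me, not taken from the original post; outputs shown as comments):

torch.zeros(2, 3)
# tensor([[0., 0., 0.],
#         [0., 0., 0.]])
torch.zeros(5)
# tensor([0., 0., 0., 0., 0.])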
torch.zeros_like(input, *, dtype=None, layout=None, device=None, requires_grad=False, memory_format=torch.preserve_format) → Tensor
Parameters:
input (Tensor) – the size of input will determine size of the output tensor.
Keyword Arguments:
dtype (torch.dtype, optional) – the desired data type of returned Tensor. Default: if None, defaults to the dtype of input.
layout (torch.layout, optional) – the desired layout of returned tensor. Default: if None, defaults to the layout of input.
device (torch.device, optional) – the desired device of returned tensor. Default: if None, defaults to the device of input.
requires_grad (bool, optional) – If autograd should record operations on the returned tensor. Default: False.
memory_format (torch.memory_format, optional) – the desired memory format of returned Tensor. Default: torch.preserve_format.

Returns a tensor filled with the scalar value 0, with the same size as input. torch.zeros_like(input) is equivalent to torch.zeros(input.size(), dtype=input.dtype, layout=input.layout, device=input.device).
As of version 0.4, this function no longer supports the out keyword. Instead, the old torch.zeros_like(input, out=output) is equivalent to torch.zeros(input.size(), out=output).
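A matching zeros_like sketch (the input tensor is illustrative):

input = torch.empty(2, 3)
torch.zeros_like(input)  # zeros with input's shape, dtype, and device
# tensor([[0., 0., 0.],
#         [0., 0., 0.]])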

9. ONES and ONES_LIKE

torch.ones(*size, *, out=None, dtype=None, layout=torch.strided, device=None, requires_grad=False) → Tensor
Parameters:
size (int…) – a sequence of integers defining the shape of the output tensor. Can be a variable number of arguments or a collection like a list or tuple.
Keyword Arguments:
out (Tensor, optional) – the output tensor.
dtype (torch.dtype, optional) – the desired data type of returned tensor. Default: if None, uses a global default (see torch.set_default_tensor_type()).
layout (torch.layout, optional) – the desired layout of returned Tensor. Default: torch.strided.
device (torch.device, optional) – the desired device of returned tensor. Default: if None, uses the current device for the default tensor type (see torch.set_default_tensor_type()). device will be the CPU for CPU tensor types and the current CUDA device for CUDA tensor types.
requires_grad (bool, optional) – If autograd should record operations on the returned tensor. Default: False.

Returns a tensor filled with the scalar value 1, with the shape defined by the variable argument size.
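For illustration (shapes chosen by me):

torch.ones(2, 3)
# tensor([[1., 1., 1.],
#         [1., 1., 1.]])
torch.ones(5)
# tensor([1., 1., 1., 1., 1.])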
torch.ones_like(input, *, dtype=None, layout=None, device=None, requires_grad=False, memory_format=torch.preserve_format) → Tensor
Parameters:
input (Tensor) – the size of input will determine size of the output tensor.
Keyword Arguments:
dtype (torch.dtype, optional) – the desired data type of returned Tensor. Default: if None, defaults to the dtype of input.
layout (torch.layout, optional) – the desired layout of returned tensor. Default: if None, defaults to the layout of input.
device (torch.device, optional) – the desired device of returned tensor. Default: if None, defaults to the device of input.
requires_grad (bool, optional) – If autograd should record operations on the returned tensor. Default: False.
memory_format (torch.memory_format, optional) – the desired memory format of returned Tensor. Default: torch.preserve_format.

Returns a tensor filled with the scalar value 1, with the same size as input. torch.ones_like(input) is equivalent to torch.ones(input.size(), dtype=input.dtype, layout=input.layout, device=input.device).
As of version 0.4, this function no longer supports the out keyword. Instead, the old torch.ones_like(input, out=output) is equivalent to torch.ones(input.size(), out=output).
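A matching ones_like sketch (the input tensor is illustrative):

input = torch.empty(2, 3, dtype=torch.int64)
torch.ones_like(input)  # ones with input's shape, dtype, and device
# tensor([[1, 1, 1],
#         [1, 1, 1]])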

10. ARANGE

torch.arange(start=0, end, step=1, *, out=None, dtype=None, layout=torch.strided, device=None, requires_grad=False) → Tensor
Parameters:
start (Number) – the starting value for the set of points. Default: 0.
end (Number) – the ending value for the set of points
step (Number) – the gap between each pair of adjacent points. Default: 1.
Keyword Arguments:
out (Tensor, optional) – the output tensor.
dtype (torch.dtype, optional) – the desired data type of returned tensor. Default: if None, uses a global default (see torch.set_default_tensor_type()). If dtype is not given, infer the data type from the other input arguments. If any of start, end, or step are floating-point, the dtype is inferred to be the default dtype, see get_default_dtype(). Otherwise, the dtype is inferred to be torch.int64.
layout (torch.layout, optional) – the desired layout of returned Tensor. Default: torch.strided.
device (torch.device, optional) – the desired device of returned tensor. Default: if None, uses the current device for the default tensor type (see torch.set_default_tensor_type()). device will be the CPU for CPU tensor types and the current CUDA device for CUDA tensor types.
requires_grad (bool, optional) – If autograd should record operations on the returned tensor. Default: False.

Returns a 1-D tensor of size ⌈(end − start) / step⌉ with values from the interval [start, end), taken with common difference step beginning at start.
Note that a non-integer step is subject to floating-point rounding errors when compared against end; to avoid inconsistency, we recommend subtracting a small epsilon from end in such cases.
Difference from torch.range():
torch.range() is closed on both ends (it includes end), whereas torch.arange() is half-open: it includes start but excludes end, as the sketch below shows.

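Illustrative calls (values chosen by me; torch.range() appears only for the contrast and is deprecated):

torch.arange(5)
# tensor([0, 1, 2, 3, 4])
torch.arange(1, 4)
# tensor([1, 2, 3])
torch.arange(1, 2.5, 0.5)   # end=2.5 is excluded
# tensor([1.0000, 1.5000, 2.0000])
torch.range(1, 4)           # closed on both ends (deprecated)
# tensor([1., 2., 3., 4.])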

11. LINSPACE

torch.linspace(start, end, steps, *, out=None, dtype=None, layout=torch.strided, device=None, requires_grad=False) → Tensor
Parameters:
start (float) – the starting value for the set of points
end (float) – the ending value for the set of points
steps (int) – size of the constructed tensor
Keyword Arguments:
out (Tensor, optional) – the output tensor.
dtype (torch.dtype, optional) – the data type to perform the computation in. Default: if None, uses the global default dtype (see torch.get_default_dtype()) when both start and end are real, and corresponding complex dtype when either is complex.
layout (torch.layout, optional) – the desired layout of returned Tensor. Default: torch.strided.
device (torch.device, optional) – the desired device of returned tensor. Default: if None, uses the current device for the default tensor type (see torch.set_default_tensor_type()). device will be the CPU for CPU tensor types and the current CUDA device for CUDA tensor types.
requires_grad (bool, optional) – If autograd should record operations on the returned tensor. Default: False.

Creates a one-dimensional tensor of size steps whose values are evenly spaced from start to end, inclusive. That is, the values are:
(start, start + (end - start)/(steps - 1), ..., start + (steps - 2)*(end - start)/(steps - 1), end)
From PyTorch 1.11, linspace requires the steps argument. Use steps=100 to restore the previous behavior.
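Illustrative calls (values chosen by me):

torch.linspace(3, 10, steps=5)
# tensor([ 3.0000,  4.7500,  6.5000,  8.2500, 10.0000])
torch.linspace(-10, 10, steps=5)
# tensor([-10.,  -5.,   0.,   5.,  10.])
torch.linspace(start=-10, end=10, steps=1)
# tensor([-10.])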

12. LOGSPACE

torch.logspace(start, end, steps, base=10.0, *, out=None, dtype=None, layout=torch.strided, device=None, requires_grad=False) → Tensor
Parameters:
start (float) – the starting value for the set of points
end (float) – the ending value for the set of points
steps (int) – size of the constructed tensor
base (float, optional) – base of the logarithm function. Default: 10.0.
Keyword Arguments:
out (Tensor, optional) – the output tensor.
dtype (torch.dtype, optional) – the data type to perform the computation in. Default: if None, uses the global default dtype (see torch.get_default_dtype()) when both start and end are real, and corresponding complex dtype when either is complex.
layout (torch.layout, optional) – the desired layout of returned Tensor. Default: torch.strided.
device (torch.device, optional) – the desired device of returned tensor. Default: if None, uses the current device for the default tensor type (see torch.set_default_tensor_type()). device will be the CPU for CPU tensor types and the current CUDA device for CUDA tensor types.
requires_grad (bool, optional) – If autograd should record operations on the returned tensor. Default: False.

Creates a one-dimensional tensor of size steps whose values are spaced on a logarithmic scale with base base, from base^start to base^end inclusive. That is, the values are:
(base^start, base^(start + (end - start)/(steps - 1)), ..., base^(start + (steps - 2)*(end - start)/(steps - 1)), base^end)
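Illustrative calls (values chosen by me):

torch.logspace(start=-10, end=10, steps=5)
# tensor([1.0000e-10, 1.0000e-05, 1.0000e+00, 1.0000e+05, 1.0000e+10])
torch.logspace(start=0.1, end=1.0, steps=5)
# tensor([ 1.2589,  2.1135,  3.5481,  5.9566, 10.0000])
torch.logspace(start=2, end=2, steps=1, base=2)
# tensor([4.])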

13. EYE

torch.eye(n, m=None, *, out=None, dtype=None, layout=torch.strided, device=None, requires_grad=False) → Tensor
Parameters:
n (int) – the number of rows
m (int, optional) – the number of columns with default being n
Keyword Arguments:
out (Tensor, optional) – the output tensor.
dtype (torch.dtype, optional) – the desired data type of returned tensor. Default: if None, uses a global default (see torch.set_default_tensor_type()).
layout (torch.layout, optional) – the desired layout of returned Tensor. Default: torch.strided.
device (torch.device, optional) – the desired device of returned tensor. Default: if None, uses the current device for the default tensor type (see torch.set_default_tensor_type()). device will be the CPU for CPU tensor types and the current CUDA device for CUDA tensor types.
requires_grad (bool, optional) – If autograd should record operations on the returned tensor. Default: False.

Returns a 2-D tensor with ones on the diagonal and zeros elsewhere.
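Illustrative calls (sizes chosen by me):

torch.eye(3)
# tensor([[1., 0., 0.],
#         [0., 1., 0.],
#         [0., 0., 1.]])
torch.eye(2, 3)   # n=2 rows, m=3 columns
# tensor([[1., 0., 0.],
#         [0., 1., 0.]])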

14. EMPTY and EMPTY_LIKE

torch.empty(*size, *, out=None, dtype=None, layout=torch.strided, device=None, requires_grad=False, pin_memory=False, memory_format=torch.contiguous_format) → Tensor
Parameters:
size (int…) – a sequence of integers defining the shape of the output tensor. Can be a variable number of arguments or a collection like a list or tuple.
Keyword Arguments:
out (Tensor, optional) – the output tensor.
dtype (torch.dtype, optional) – the desired data type of returned tensor. Default: if None, uses a global default (see torch.set_default_tensor_type()).
layout (torch.layout, optional) – the desired layout of returned Tensor. Default: torch.strided.
device (torch.device, optional) – the desired device of returned tensor. Default: if None, uses the current device for the default tensor type (see torch.set_default_tensor_type()). device will be the CPU for CPU tensor types and the current CUDA device for CUDA tensor types.
requires_grad (bool, optional) – If autograd should record operations on the returned tensor. Default: False.
pin_memory (bool, optional) – If set, returned tensor would be allocated in the pinned memory. Works only for CPU tensors. Default: False.
memory_format (torch.memory_format, optional) – the desired memory format of returned Tensor. Default: torch.contiguous_format.

Returns a tensor filled with uninitialized data. The shape of the tensor is defined by the variable argument size.
If torch.use_deterministic_algorithms() is set to True, the output tensor is initialized to prevent any possible nondeterministic behavior from using the data as an operation's input. Floating-point and complex tensors are filled with NaN, and integer tensors are filled with the maximum value.
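For illustration (the contents of an empty tensor are arbitrary, so no output is shown):

torch.empty(2, 3)                        # contents are uninitialized and arbitrary
torch.empty((2, 3), dtype=torch.int64)   # uninitialized int64 tensor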
torch.empty_like(input, *, dtype=None, layout=None, device=None, requires_grad=False, memory_format=torch.preserve_format) → Tensor
Returns an uninitialized tensor with the same size as input. torch.empty_like(input) is equivalent to torch.empty(input.size(), dtype=input.dtype, layout=input.layout, device=input.device).
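A matching empty_like sketch (the CUDA device is my assumption; drop device='cuda' if no GPU is available):

a = torch.empty((2, 3), dtype=torch.int32, device='cuda')
torch.empty_like(a)   # uninitialized, same shape, dtype, and device as 'a'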

15. FULL and FULL_LIKE

torch.full(size, fill_value, *, out=None, dtype=None, layout=torch.strided, device=None, requires_grad=False) → Tensor
Parameters:
size (int…) – a list, tuple, or torch.Size of integers defining the shape of the output tensor.
fill_value (Scalar) – the value to fill the output tensor with.
Keyword Arguments:
out (Tensor, optional) – the output tensor.
dtype (torch.dtype, optional) – the desired data type of returned tensor. Default: if None, uses a global default (see torch.set_default_tensor_type()).
layout (torch.layout, optional) – the desired layout of returned Tensor. Default: torch.strided.
device (torch.device, optional) – the desired device of returned tensor. Default: if None, uses the current device for the default tensor type (see torch.set_default_tensor_type()). device will be the CPU for CPU tensor types and the current CUDA device for CUDA tensor types.
requires_grad (bool, optional) – If autograd should record operations on the returned tensor. Default: False.

Creates a tensor of size size filled with fill_value. The tensor's dtype is inferred from fill_value.
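For illustration (fill value chosen by me):

torch.full((2, 3), 3.141592)
# tensor([[3.1416, 3.1416, 3.1416],
#         [3.1416, 3.1416, 3.1416]])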
torch.full_like(input, fill_value, *, dtype=None, layout=torch.strided, device=None, requires_grad=False, memory_format=torch.preserve_format) → Tensor
Returns a tensor with the same size as input filled with fill_value. torch.full_like(input, fill_value) is equivalent to torch.full(input.size(), fill_value, dtype=input.dtype, layout=input.layout, device=input.device).
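A matching full_like sketch (the input tensor is illustrative):

input = torch.empty(2, 3)
torch.full_like(input, 7.0)
# tensor([[7., 7., 7.],
#         [7., 7., 7.]])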
