The pyi File Generation Mechanism in PyTorch

  • Preface
  • pyi files
  • Generating pyi.in from py
  • Generating pyi from pyi.in
    • torch/CMakeLists.txt
    • tools/pyi/gen_pyi.py
    • gen_pyi
      • native_functions
        • rand.names & rand.names_out
        • rand.generator_with_names & rand.generator_with_names_out
        • rand
        • rand.generator
        • rand.out
        • rand.generator_out
        • add.Tensor && add.out
        • add_.Tensor && add.out
        • add.out
      • function_signatures
        • rand.names & rand.names_out
        • rand.generator_with_names & rand.generator_with_names_out
        • rand
        • rand.generator
        • rand.out
        • rand.generator_out
        • add.Tensor && add.out
        • add_.Tensor && add.out
        • add.out
      • sig_groups
        • rand.generator_with_names & rand.generator_with_names_out
        • rand.generator & rand.generator_out
        • rand.names & rand.names_out
        • rand & rand.out
        • add.Tensor & add.out
        • add & add.Tensor & add.out
      • unsorted_function_hints
        • rand
        • add
      • function_hints
        • rand
        • add
      • hinted_function_names
      • all_symbols
      • all_directive
      • env
    • gen_nn_functional
    • datapipe.pyi
    • Generated results
  • Type checking with pyi

Preface

In PyTorch, looking up the definition of a Python function will, nine times out of ten, take you to torch/_C/_VariableFunctions.pyi. Yet if you search PyTorch's GitHub repo for this file, you will only find the similarly named torch/_C/_VariableFunctions.pyi.in; torch/_C/_VariableFunctions.pyi itself is nowhere to be found.

Opening torch/_C/_VariableFunctions.pyi and taking a look:

# @generated from torch/_C/_VariableFunctions.pyi.in

we find that the very first line says it all: the file is generated dynamically at build time from torch/_C/_VariableFunctions.pyi.in.

This article explores how PyTorch generates its pyi files. The generation process can be roughly divided into two steps:

  1. Generating pyi.in from py

  2. Generating pyi from pyi.in

But before that, let's first look at what role pyi files play in Python.

pyi files

First, where does the name of the pyi file type come from? According to What does “i” represent in Python .pyi extension?:

The i in .pyi stands for ‘interface’. The .pyi extension was first mentioned in this GitHub issue thread where JukkaL says: I'd probably prefer an extension with just a single dot. It also needs to be something that is not in use (it should not be used by cython, etc.). .pys seems to be used in Windows (or was). Maybe .pyi, where i stands for an interface definition?

So the i in pyi stands for interface.

pyi implements "stub" file (definition from Martin Fowler): Stubs: provide canned answers to calls made during the test, usually not responding at all to anything outside what's programmed in for the test.

What it embodies is the notion of a stub; see the Wikipedia entry 樁 (計算機):

A stub (Method Stub) is a piece of code used to stand in for some other functionality. A stub may simulate the behavior of existing code (such as a procedure on a remote machine) or be a temporary substitute for yet-to-be-developed code. Stubbing is therefore very useful in program porting, distributed computing, and general software development and testing.

As the article pyi文件是干嘛的?(一文读懂Python的存根文件和类型检查) puts it, a pyi file only provides type hints in the IDE; it is not required.
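
For example, a stub shadows a module of the same name and contains only signatures, with bodies elided as .... A minimal sketch (greet.py/greet.pyi are hypothetical names):

# greet.py -- the actual implementation
def greet(name, excited=False):
    return f"Hello, {name}{'!' if excited else '.'}"

# greet.pyi -- what the type checker / IDE reads instead
def greet(name: str, excited: bool = ...) -> str: ...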

The same is true in PyTorch: torch/_C/_VariableFunctions.pyi is used only for type hints. The actual association between Python functions and C++ functions is specified by torch/csrc/autograd/generated/python_torch_functions_i.cpp, which is itself generated automatically at build time; see the companion article PyTorch中的python_torch_functions_i.cpp檔案生成機制.

Generating pyi.in from py

The PyTorch source tree contains the following .pyi.in files:

torch/_C/__init__.pyi.in
torch/_C/_nn.pyi.in
torch/_C/return_types.pyi.in
torch/_C/_VariableFunctions.pyi.in
torch/nn/functional.pyi.in
torch/utils/data/datapipes/datapipe.pyi.in

According to the comment in torch/nn/functional.pyi.in:

# These stubs were generated by running stubgen (`stubgen --parse-only functional.py`), followed by manual cleaning.

functional.pyi.in was generated from functional.py with mypy's stubgen tool and then cleaned up by hand.

Let's try running stubgen on torch/nn/functional.py ourselves: first copy functional.py somewhere convenient, then run:

stubgen functional.py

If import-related errors like the following show up, simply comment out the offending lines by hand first:

Critical error during semantic analysis: functional.py:23: error: No parent module -- cannot perform relative import
functional.py:24: error: No parent module -- cannot perform relative import

For now, focus only on this part:

def fractional_max_pool2d_with_indices(
    input: Tensor, kernel_size: BroadcastingList2[int],
    output_size: Optional[BroadcastingList2[int]] = None,
    output_ratio: Optional[BroadcastingList2[float]] = None,
    return_indices: bool = False,
    _random_samples: Optional[Tensor] = None
) -> Tuple[Tensor, Tensor]:
    # ...

fractional_max_pool2d = boolean_dispatch(
    arg_name="return_indices",
    arg_index=4,
    default=False,
    if_true=fractional_max_pool2d_with_indices,
    if_false=_fractional_max_pool2d,
    module_name=__name__,
    func_name="fractional_max_pool2d",
)

The corresponding content in the generated functional.pyi:

# ...
def fractional_max_pool2d_with_indices(input: Tensor, kernel_size: BroadcastingList2[int], output_size: Optional[BroadcastingList2[int]] = ..., output_ratio: Optional[BroadcastingList2[float]] = ..., return_indices: bool = ..., _random_samples: Optional[Tensor] = ...) -> Tuple[Tensor, Tensor]: ...
fractional_max_pool2d: Incomplete
# ...

The signature of fractional_max_pool2d_with_indices is nearly identical to the original, while fractional_max_pool2d is marked Incomplete because its type cannot be inferred: it is created at runtime by boolean_dispatch, which stubgen's parse-only analysis cannot see through.
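
One plausible way to clean up such an Incomplete entry by hand (just a sketch; the actual functional.pyi.in may annotate it differently) is to spell out the two dispatch outcomes with typing.overload:

from typing import Literal, Optional, Tuple, Union, overload
from torch import Tensor

# return_indices=False returns a Tensor; return_indices=True returns a pair
@overload
def fractional_max_pool2d(input: Tensor, kernel_size: Union[int, Tuple[int, int]],
                          output_size: Optional[Union[int, Tuple[int, int]]] = ...,
                          output_ratio: Optional[Union[float, Tuple[float, float]]] = ...,
                          return_indices: Literal[False] = ...,
                          _random_samples: Optional[Tensor] = ...) -> Tensor: ...
@overload
def fractional_max_pool2d(input: Tensor, kernel_size: Union[int, Tuple[int, int]],
                          output_size: Optional[Union[int, Tuple[int, int]]] = ...,
                          output_ratio: Optional[Union[float, Tuple[float, float]]] = ...,
                          *, return_indices: Literal[True],
                          _random_samples: Optional[Tensor] = ...) -> Tuple[Tensor, Tensor]: ...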

In principle, a .pyi.in file is generated from a .py file, but the .pyi.in files under torch/_C have no corresponding .py files; presumably each was produced by merging several .py files into a single .pyi.in.

Generating pyi from pyi.in

Normally, .pyi files are generated directly by stubgen. In PyTorch, however, stubgen plus manual editing yields the pyi.in files, and the .pyi files are then generated from those .pyi.in files by a Python script.

torch/CMakeLists.txt

torch/CMakeLists.txt

This adds a custom target named torch_python_stubs that depends on the pyi files below. (For add_custom_target and the add_custom_command we are about to see, refer to cmake的add_custom_command及add_custom_target.)

add_custom_target(torch_python_stubs DEPENDS
    "${TORCH_SRC_DIR}/_C/__init__.pyi"
    "${TORCH_SRC_DIR}/_C/_VariableFunctions.pyi"
    "${TORCH_SRC_DIR}/nn/functional.pyi"
    "${TORCH_SRC_DIR}/utils/data/datapipes/datapipe.pyi"
)

Looking at the OUTPUT arguments of the add_custom_command below, we can see that this custom command is precisely what generates the first three pyi files that torch_python_stubs depends on. As for how the remaining datapipe.pyi is generated, see the datapipe.pyi section.

file(GLOB_RECURSE torchgen_python "${PROJECT_SOURCE_DIR}/torchgen/*.py")
file(GLOB_RECURSE autograd_python "${TOOLS_PATH}/autograd/*.py")
file(GLOB_RECURSE pyi_python "${TOOLS_PATH}/pyi/*.py")
add_custom_command(
    OUTPUT
    "${TORCH_SRC_DIR}/_C/__init__.pyi"
    "${TORCH_SRC_DIR}/_C/_VariableFunctions.pyi"
    "${TORCH_SRC_DIR}/nn/functional.pyi"
    COMMAND
    "${PYTHON_EXECUTABLE}" -mtools.pyi.gen_pyi
    --native-functions-path "aten/src/ATen/native/native_functions.yaml"
    --tags-path "aten/src/ATen/native/tags.yaml"
    --deprecated-functions-path "tools/autograd/deprecated.yaml"
    DEPENDS
    "${TORCH_SRC_DIR}/_C/__init__.pyi.in"
    "${TORCH_SRC_DIR}/_C/_VariableFunctions.pyi.in"
    "${TORCH_SRC_DIR}/nn/functional.pyi.in"
    "${TORCH_ROOT}/aten/src/ATen/native/native_functions.yaml"
    "${TORCH_ROOT}/aten/src/ATen/native/tags.yaml"
    "${TORCH_ROOT}/tools/autograd/deprecated.yaml"
    ${pyi_python}
    ${autograd_python}
    ${torchgen_python}
    WORKING_DIRECTORY
    "${TORCH_ROOT}"
)

The entry point here is the COMMAND of the add_custom_command, which invokes tools/pyi/gen_pyi.py via "${PYTHON_EXECUTABLE}" -mtools.pyi.gen_pyi. The inputs are the _C/__init__.pyi.in, _C/_VariableFunctions.pyi.in and nn/functional.pyi.in files listed under DEPENDS; once the command finishes, the three pyi files listed under OUTPUT are produced.
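
Outside of CMake, the same step can be reproduced by hand from the repo root, mirroring the COMMAND above:

python -m tools.pyi.gen_pyi \
    --native-functions-path aten/src/ATen/native/native_functions.yaml \
    --tags-path aten/src/ATen/native/tags.yaml \
    --deprecated-functions-path tools/autograd/deprecated.yaml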

torch/_C/_nn.pyi and torch/_C/return_types.pyi are also generated by tools/pyi/gen_pyi.py, so why are they not listed in the add_custom_target, or in the DEPENDS and OUTPUT of the add_custom_command?

This adds a shared library named torch_python; building it produces build/lib/libtorch_python.so.

add_library(torch_python SHARED ${TORCH_PYTHON_SRCS})

Next, torch_python is declared to depend on the torch_python_stubs custom target.

add_dependencies(torch_python torch_python_stubs)

On non-macOS systems, a library named nnapi_backend is built, and torch_python is among its dependencies.

# Skip building this library under MacOS, since it is currently failing to build on Mac
# Github issue #61930
if(NOT ${CMAKE_SYSTEM_NAME} MATCHES "Darwin")
    # Add Android Nnapi delegate library
    add_library(nnapi_backend SHARED
        ${TORCH_SRC_DIR}/csrc/jit/backends/nnapi/nnapi_backend_lib.cpp
        ${TORCH_SRC_DIR}/csrc/jit/backends/nnapi/nnapi_backend_preprocess.cpp)
    # Pybind11 requires explicit linking of the torch_python library
    target_link_libraries(nnapi_backend PRIVATE torch torch_python pybind::pybind11)
endif()

To summarize, there is a dependency chain nnapi_backend -> torch_python -> torch_python_stubs -> torch/_C/__init__.pyi, torch/_C/_VariableFunctions.pyi, torch/nn/functional.pyi. Building the nnapi_backend library is therefore what triggers the call to tools/pyi/gen_pyi.py that generates the .pyi files.

tools/pyi/gen_pyi.py

CMakeLists.txt invokes tools/pyi/gen_pyi.py via "${PYTHON_EXECUTABLE}" -mtools.pyi.gen_pyi; its job is to generate the .pyi files from the .pyi.in files.

def main() -> None:
    parser = argparse.ArgumentParser(description="Generate type stubs for PyTorch")
    parser.add_argument(
        "--native-functions-path",
        metavar="NATIVE",
        default="aten/src/ATen/native/native_functions.yaml",
        help="path to native_functions.yaml",
    )
    parser.add_argument(
        "--tags-path",
        metavar="TAGS",
        default="aten/src/ATen/native/tags.yaml",
        help="path to tags.yaml",
    )
    parser.add_argument(
        "--deprecated-functions-path",
        metavar="DEPRECATED",
        default="tools/autograd/deprecated.yaml",
        help="path to deprecated.yaml",
    )
    parser.add_argument("--out", metavar="OUT", default=".", help="path to output directory")
    args = parser.parse_args()
    fm = FileManager(install_dir=args.out, template_dir=".", dry_run=False)
    gen_pyi(args.native_functions_path, args.tags_path, args.deprecated_functions_path, fm)


if __name__ == "__main__":
    main()

From the comments in gen_pyi.py:

- We start off with a hand-written __init__.pyi.in file.  This
  file contains type definitions for everything we cannot automatically
  generate, including pure Python definitions directly in __init__.py
  (the latter case should be pretty rare).
- We go through automatically bound functions based on the
  type information recorded in native_functions.yaml and
  generate type hints for them (generate_type_hints)

native_functions.yaml records type information for the automatically bound functions (presumably the bindings between Python and C++ functions); based on that information, gen_pyi.py generates type hints with the generate_type_hints function (which will show up later in the unsorted_function_hints section).
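
To give a feel for the result, the hints that land in _VariableFunctions.pyi look roughly like the stub-style sketch below (heavily simplified; the real output spells out more parameters and uses torch's internal type aliases):

import torch
from typing import Optional, Sequence, overload

@overload
def rand(size: Sequence[int], *, names: Optional[Sequence[str]],
         dtype: Optional[torch.dtype] = ..., requires_grad: bool = ...) -> torch.Tensor: ...
@overload
def rand(size: Sequence[int], *, generator: Optional[torch.Generator],
         out: Optional[torch.Tensor] = ..., dtype: Optional[torch.dtype] = ...,
         requires_grad: bool = ...) -> torch.Tensor: ...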

gen_pyi

tools/pyi/gen_pyi.py

This function generates _C/__init__.pyi, _C/_VariableFunctions.pyi, torch/_VF.pyi and torch/return_types.pyi from _C/__init__.pyi.in, _C/_VariableFunctions.pyi.in and torch/_C/return_types.pyi.in.

def gen_pyi(
    native_yaml_path: str,
    tags_yaml_path: str,
    deprecated_yaml_path: str,
    fm: FileManager,
) -> None:
    """gen_pyi()

    This function generates a pyi file for torch.
    """
    # ...

The first three parameters default to:

  • native_yaml_path: aten/src/ATen/native/native_functions.yaml
  • tags_yaml_path: aten/src/ATen/native/tags.yaml
  • deprecated_yaml_path: tools/autograd/deprecated.yaml

The two arguments passed to fm's constructor are:

  • install_dir: args.out, i.e. '.'
  • template_dir: '.'
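
With these settings, template and output paths are both resolved relative to the repo root, and the writes inside gen_pyi look roughly like this (a simplified sketch of the write_with_template calls in tools/pyi/gen_pyi.py):

# Simplified sketch: substitute the collected env dict into the ${...}
# placeholders of the template and write the result out.
fm.write_with_template(
    "torch/_C/_VariableFunctions.pyi",     # output, relative to install_dir ('.')
    "torch/_C/_VariableFunctions.pyi.in",  # template, relative to template_dir ('.')
    lambda: env,
)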

native_functions

Parsing native_functions.yaml and tags.yaml yields the native_functions variable:

    native_functions = parse_native_yaml(native_yaml_path, tags_yaml_path).native_functions
    native_functions = list(filter(should_generate_py_binding, native_functions))

native_functions is a list of NativeFunction objects representing the functions in the aten namespace; its zeroth element is:

NativeFunction(namespace='aten', func=FunctionSchema(name=OperatorName(name=BaseOperatorName(base='_cast_Byte', inplace=False, dunder_method=False, functional_overload=False), overload_name=''), arguments=Arguments(pre_self_positional=(), self_arg=SelfArgument(argument=Argument(name='self', type=BaseType(name=<BaseTy.Tensor: 3>), default=None, annotation=None)), post_self_positional=(Argument(name='non_blocking', type=BaseType(name=<BaseTy.bool: 9>), default='False', annotation=None),), pre_tensor_options_kwarg_only=(), tensor_options=None, post_tensor_options_kwarg_only=(), out=()), returns=(Return(name=None, type=BaseType(name=<BaseTy.Tensor: 3>), annotation=None),)), use_const_ref_for_mutable_tensors=False, device_guard=True, device_check=<DeviceCheckType.ExactSame: 1>, python_module=None, category_override=None, variants={<Variant.function: 1>}, manual_kernel_registration=False, manual_cpp_binding=False, loc=Location(file='aten/src/ATen/native/native_functions.yaml', line=9), autogen=[], ufunc_inner_loop={}, structured=False, structured_delegate=None, structured_inherits=None, precomputed=None, cpp_no_default_args=set(), is_abstract=False, has_composite_implicit_autograd_kernel=True, has_composite_implicit_autograd_nested_tensor_kernel=False, has_composite_explicit_autograd_kernel=False, has_composite_explicit_autograd_non_functional_kernel=False, tags=set())
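
Dumps like this can be reproduced interactively from the repo root (a small sketch; it assumes torchgen from the source tree is importable):

from torchgen.gen import parse_native_yaml

parsed = parse_native_yaml(
    "aten/src/ATen/native/native_functions.yaml",
    "aten/src/ATen/native/tags.yaml",
)
print(parsed.native_functions[0])  # the _cast_Byte entry shown above
rands = [f for f in parsed.native_functions if f.func.name.name.base == "rand"]
print([str(f.func.name) for f in rands])  # the six rand overloads discussed next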

The elements representing the rand function are shown below. aten::rand has six overload names: names, generator_with_names, the empty string, generator, out, and generator_out. They can be cross-referenced against native_functions.yaml:

rand.names & rand.names_out
- func: rand.names(SymInt[] size, *, Dimname[]? names, ScalarType? dtype=None, Layout? layout=None, Device? device=None, bool? pin_memory=None) -> Tensor
  device_check: NoCheck
  device_guard: False
  dispatch:
    CompositeExplicitAutograd: rand
  autogen: rand.names_out
  tags: nondeterministic_seeded

The autogen field in the yaml contains rand.names_out; comparing with the element in native_functions, we can see the NativeFunction's autogen member likewise holds an OperatorName whose overload_name is names_out.

NativeFunction(namespace='aten', func=FunctionSchema(name=OperatorName(name=BaseOperatorName(base='rand', inplace=False, dunder_method=False, functional_overload=False), overload_name='names'), arguments=Arguments(pre_self_positional=(), self_arg=None, post_self_positional=(Argument(name='size', type=ListType(elem=BaseType(name=<BaseTy.SymInt: 17>), size=None), default=None, annotation=None),), pre_tensor_options_kwarg_only=(Argument(name='names', type=OptionalType(elem=ListType(elem=BaseType(name=<BaseTy.Dimname: 5>), size=None)), default=None, annotation=None),), tensor_options=TensorOptionsArguments(dtype=Argument(name='dtype', type=OptionalType(elem=BaseType(name=<BaseTy.ScalarType: 2>)), default='None', annotation=None), layout=Argument(name='layout', type=OptionalType(elem=BaseType(name=<BaseTy.Layout: 10>)), default='None', annotation=None), device=Argument(name='device', type=OptionalType(elem=BaseType(name=<BaseTy.Device: 11>)), default='None', annotation=None), pin_memory=Argument(name='pin_memory', type=OptionalType(elem=BaseType(name=<BaseTy.bool: 9>)), default='None', annotation=None)), post_tensor_options_kwarg_only=(), out=()), returns=(Return(name=None, type=BaseType(name=<BaseTy.Tensor: 3>), annotation=None),)), use_const_ref_for_mutable_tensors=False, device_guard=False, device_check=<DeviceCheckType.NoCheck: 0>, python_module=None, category_override=None, variants={<Variant.function: 1>}, manual_kernel_registration=False, manual_cpp_binding=False, loc=Location(file='aten/src/ATen/native/native_functions.yaml', line=4254), autogen=[OperatorName(name=BaseOperatorName(base='rand', inplace=False, dunder_method=False, functional_overload=False), overload_name='names_out')], ufunc_inner_loop={}, structured=False, structured_delegate=None, structured_inherits=None, precomputed=None, cpp_no_default_args=set(), is_abstract=True, has_composite_implicit_autograd_kernel=False,has_composite_implicit_autograd_nested_tensor_kernel=False, has_composite_explicit_autograd_kernel=True, has_composite_explicit_autograd_non_functional_kernel=False, tags={'nondeterministic_seeded'})
rand.generator_with_names & rand.generator_with_names_out
- func: rand.generator_with_names(SymInt[] size, *, Generator? generator, Dimname[]? names, ScalarType? dtype=None, Layout? layout=None, Device? device=None, bool? pin_memory=None) -> Tensor
  device_check: NoCheck
  device_guard: False
  tags: nondeterministic_seeded
  dispatch:
    CompositeExplicitAutograd: rand
  autogen: rand.generator_with_names_out

The autogen field in the yaml contains rand.generator_with_names_out; comparing with the dump below, the NativeFunction's autogen member likewise holds an OperatorName whose overload_name is generator_with_names_out.

NativeFunction(namespace='aten', func=FunctionSchema(name=OperatorName(name=BaseOperatorName(base='rand', inplace=False, dunder_method=False, functional_overload=False), overload_name='generator_with_names'), arguments=Arguments(pre_self_positional=(), self_arg=None, post_self_positional=(Argument(name='size', type=ListType(elem=BaseType(name=<BaseTy.SymInt: 17>), size=None), default=None, annotation=None),), pre_tensor_options_kwarg_only=(Argument(name='generator', type=OptionalType(elem=BaseType(name=<BaseTy.Generator: 1>)), default=None, annotation=None), Argument(name='names', type=OptionalType(elem=ListType(elem=BaseType(name=<BaseTy.Dimname: 5>), size=None)), default=None, annotation=None)), tensor_options=TensorOptionsArguments(dtype=Argument(name='dtype', type=OptionalType(elem=BaseType(name=<BaseTy.ScalarType: 2>)), default='None', annotation=None), layout=Argument(name='layout', type=OptionalType(elem=BaseType(name=<BaseTy.Layout: 10>)), default='None', annotation=None), device=Argument(name='device', type=OptionalType(elem=BaseType(name=<BaseTy.Device: 11>)), default='None', annotation=None), pin_memory=Argument(name='pin_memory', type=OptionalType(elem=BaseType(name=<BaseTy.bool: 9>)), default='None', annotation=None)), post_tensor_options_kwarg_only=(), out=()), returns=(Return(name=None, type=BaseType(name=<BaseTy.Tensor: 3>), annotation=None),)), use_const_ref_for_mutable_tensors=False, device_guard=False, device_check=<DeviceCheckType.NoCheck: 0>, python_module=None, category_override=None, variants={<Variant.function: 1>}, manual_kernel_registration=False, manual_cpp_binding=False, loc=Location(file='aten/src/ATen/native/native_functions.yaml', line=4262), autogen=[OperatorName(name=BaseOperatorName(base='rand', inplace=False, dunder_method=False, functional_overload=False), overload_name='generator_with_names_out')], ufunc_inner_loop={}, structured=False, structured_delegate=None, structured_inherits=None, precomputed=None, cpp_no_default_args=set(), is_abstract=True, has_composite_implicit_autograd_kernel=False, has_composite_implicit_autograd_nested_tensor_kernel=False, has_composite_explicit_autograd_kernel=True, has_composite_explicit_autograd_non_functional_kernel=False, tags={'nondeterministic_seeded'})
rand
- func: rand(SymInt[] size, *, ScalarType? dtype=None, Layout? layout=None, Device? device=None, bool? pin_memory=None) -> Tensor
  tags: nondeterministic_seeded
  dispatch:
    CompositeExplicitAutograd: rand
NativeFunction(namespace='aten', func=FunctionSchema(name=OperatorName(name=BaseOperatorName(base='rand', inplace=False, dunder_method=False, functional_overload=False), overload_name=''), arguments=Arguments(pre_self_positional=(), self_arg=None, post_self_positional=(Argument(name='size', type=ListType(elem=BaseType(name=<BaseTy.SymInt: 17>), size=None), default=None, annotation=None),), pre_tensor_options_kwarg_only=(), tensor_options=TensorOptionsArguments(dtype=Argument(name='dtype', type=OptionalType(elem=BaseType(name=<BaseTy.ScalarType: 2>)), default='None', annotation=None), layout=Argument(name='layout', type=OptionalType(elem=BaseType(name=<BaseTy.Layout: 10>)), default='None', annotation=None), device=Argument(name='device', type=OptionalType(elem=BaseType(name=<BaseTy.Device: 11>)), default='None', annotation=None), pin_memory=Argument(name='pin_memory', type=OptionalType(elem=BaseType(name=<BaseTy.bool: 9>)), default='None', annotation=None)), post_tensor_options_kwarg_only=(), out=()), returns=(Return(name=None, type=BaseType(name=<BaseTy.Tensor: 3>), annotation=None),)), use_const_ref_for_mutable_tensors=False, device_guard=True, device_check=<DeviceCheckType.ExactSame: 1>, python_module=None, category_override=None, variants={<Variant.function: 1>}, manual_kernel_registration=False, manual_cpp_binding=False, loc=Location(file='aten/src/ATen/native/native_functions.yaml', line=4270), autogen=[], ufunc_inner_loop={}, structured=False, structured_delegate=None, structured_inherits=None, precomputed=None, cpp_no_default_args=set(), is_abstract=True, has_composite_implicit_autograd_kernel=False, has_composite_implicit_autograd_nested_tensor_kernel=False, has_composite_explicit_autograd_kernel=True, has_composite_explicit_autograd_non_functional_kernel=False, tags={'nondeterministic_seeded'})
rand.generator
- func: rand.generator(SymInt[] size, *, Generator? generator, ScalarType? dtype=None, Layout? layout=None, Device? device=None, bool? pin_memory=None) -> Tensor
  tags: nondeterministic_seeded
  dispatch:
    CompositeExplicitAutograd: rand
NativeFunction(namespace='aten', func=FunctionSchema(name=OperatorName(name=BaseOperatorName(base='rand', inplace=False, dunder_method=False, functional_overload=False), overload_name='generator'), arguments=Arguments(pre_self_positional=(), self_arg=None, post_self_positional=(Argument(name='size', type=ListType(elem=BaseType(name=<BaseTy.SymInt: 17>), size=None), default=None, annotation=None),), pre_tensor_options_kwarg_only=(Argument(name='generator', type=OptionalType(elem=BaseType(name=<BaseTy.Generator: 1>)), default=None, annotation=None),), tensor_options=TensorOptionsArguments(dtype=Argument(name='dtype', type=OptionalType(elem=BaseType(name=<BaseTy.ScalarType: 2>)), default='None', annotation=None), layout=Argument(name='layout', type=OptionalType(elem=BaseType(name=<BaseTy.Layout: 10>)), default='None', annotation=None), device=Argument(name='device', type=OptionalType(elem=BaseType(name=<BaseTy.Device: 11>)), default='None', annotation=None), pin_memory=Argument(name='pin_memory', type=OptionalType(elem=BaseType(name=<BaseTy.bool: 9>)), default='None', annotation=None)), post_tensor_options_kwarg_only=(), out=()), returns=(Return(name=None, type=BaseType(name=<BaseTy.Tensor: 3>), annotation=None),)), use_const_ref_for_mutable_tensors=False, device_guard=True, device_check=<DeviceCheckType.ExactSame: 1>, python_module=None, category_override=None, variants={<Variant.function: 1>}, manual_kernel_registration=False, manual_cpp_binding=False, loc=Location(file='aten/src/ATen/native/native_functions.yaml', line=4275), autogen=[], ufunc_inner_loop={}, structured=False, structured_delegate=None, structured_inherits=None, precomputed=None, cpp_no_default_args=set(), is_abstract=True, has_composite_implicit_autograd_kernel=False, has_composite_implicit_autograd_nested_tensor_kernel=False, has_composite_explicit_autograd_kernel=True, has_composite_explicit_autograd_non_functional_kernel=False, tags={'nondeterministic_seeded'})
rand.out
- func: rand.out(SymInt[] size, *, Tensor(a!) out) -> Tensor(a!)
  tags: nondeterministic_seeded
  dispatch:
    CompositeExplicitAutograd: rand_out
NativeFunction(namespace='aten', func=FunctionSchema(name=OperatorName(name=BaseOperatorName(base='rand', inplace=False, dunder_method=False, functional_overload=False), overload_name='out'), arguments=Arguments(pre_self_positional=(), self_arg=None, post_self_positional=(Argument(name='size', type=ListType(elem=BaseType(name=<BaseTy.SymInt: 17>), size=None), default=None, annotation=None),), pre_tensor_options_kwarg_only=(), tensor_options=None, post_tensor_options_kwarg_only=(), out=(Argument(name='out', type=BaseType(name=<BaseTy.Tensor: 3>), default=None, annotation=Annotation(alias_set=('a',), is_write=True, alias_set_after=())),)), returns=(Return(name=None, type=BaseType(name=<BaseTy.Tensor: 3>), annotation=Annotation(alias_set=('a',), is_write=True, alias_set_after=())),)), use_const_ref_for_mutable_tensors=False, device_guard=True, device_check=<DeviceCheckType.ExactSame: 1>, python_module=None, category_override=None, variants={<Variant.function: 1>}, manual_kernel_registration=False, manual_cpp_binding=False, loc=Location(file='aten/src/ATen/native/native_functions.yaml', line=4280), autogen=[], ufunc_inner_loop={}, structured=False, structured_delegate=None, structured_inherits=None, precomputed=None, cpp_no_default_args=set(), is_abstract=True, has_composite_implicit_autograd_kernel=False, has_composite_implicit_autograd_nested_tensor_kernel=False, has_composite_explicit_autograd_kernel=True, has_composite_explicit_autograd_non_functional_kernel=False, tags={'nondeterministic_seeded'})
rand.generator_out
- func: rand.generator_out(SymInt[] size, *, Generator? generator, Tensor(a!) out) -> Tensor(a!)
  tags: nondeterministic_seeded
NativeFunction(namespace='aten', func=FunctionSchema(name=OperatorName(name=BaseOperatorName(base='rand', inplace=False, dunder_method=False, functional_overload=False), overload_name='generator_out'), arguments=Arguments(pre_self_positional=(), self_arg=None, post_self_positional=(Argument(name='size', type=ListType(elem=BaseType(name=<BaseTy.SymInt: 17>), size=None), default=None, annotation=None),), pre_tensor_options_kwarg_only=(Argument(name='generator', type=OptionalType(elem=BaseType(name=<BaseTy.Generator: 1>)), default=None, annotation=None),), tensor_options=None, post_tensor_options_kwarg_only=(), out=(Argument(name='out', type=BaseType(name=<BaseTy.Tensor: 3>), default=None, annotation=Annotation(alias_set=('a',), is_write=True, alias_set_after=())),)), returns=(Return(name=None, type=BaseType(name=<BaseTy.Tensor: 3>), annotation=Annotation(alias_set=('a',), is_write=True, alias_set_after=())),)), use_const_ref_for_mutable_tensors=False, device_guard=True, device_check=<DeviceCheckType.ExactSame: 1>, python_module=None, category_override=None, variants={<Variant.function: 1>}, manual_kernel_registration=False, manual_cpp_binding=False, loc=Location(file='aten/src/ATen/native/native_functions.yaml', line=4285), autogen=[], ufunc_inner_loop={}, structured=False, structured_delegate=None, structured_inherits=None, precomputed=None, cpp_no_default_args=set(), is_abstract=False, has_composite_implicit_autograd_kernel=True, has_composite_implicit_autograd_nested_tensor_kernel=False, has_composite_explicit_autograd_kernel=False, has_composite_explicit_autograd_non_functional_kernel=False, tags={'nondeterministic_seeded'})

Because rand.names and rand.generator_with_names autogenerate corresponding out variants, the six rand-related entries in native_functions.yaml ultimately yield eight functions in the C++ aten namespace.
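
From the Python side, the overloads can be exercised as follows (a small illustration; the schema each call resolves to is noted in the comments):

import torch

g = torch.Generator().manual_seed(0)
buf = torch.empty(2, 3)

torch.rand(2, 3)                                 # rand
torch.rand(2, 3, generator=g)                    # rand.generator
torch.rand(2, 3, out=buf)                        # rand.out
torch.rand(2, 3, generator=g, out=buf)           # rand.generator_out
torch.rand(2, 3, names=("N", "C"))               # rand.names (named-tensor API)
torch.rand(2, 3, generator=g, names=("N", "C"))  # rand.generator_with_names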

add.Tensor && add.out

The add function that takes self and other and returns the result.

- func: add.Tensor(Tensor self, Tensor other, *, Scalar alpha=1) -> Tensor
  device_check: NoCheck   # TensorIterator
  structured_delegate: add.out
  variants: function, method
  dispatch:
    SparseCPU, SparseCUDA: add_sparse
    SparseCsrCPU, SparseCsrCUDA: add_sparse_csr
    MkldnnCPU: mkldnn_add
    ZeroTensor: add_zerotensor
    NestedTensorCPU, NestedTensorCUDA: NestedTensor_add_Tensor
  tags: [canonical, pointwise]
NativeFunction(namespace='aten', func=FunctionSchema(name=OperatorName(name=BaseOperatorName(base='add', inplace=False, dunder_method=False, functional_overload=False), overload_name='Tensor'), arguments=Arguments(pre_self_positional=(), self_arg=SelfArgument(argument=Argument(name='self', type=BaseType(name=<BaseTy.Tensor: 3>), default=None, annotation=None)), post_self_positional=(Argument(name='other', type=BaseType(name=<BaseTy.Tensor: 3>), default=None, annotation=None),), pre_tensor_options_kwarg_only=(Argument(name='alpha', type=BaseType(name=<BaseTy.Scalar: 12>), default='1', annotation=None),), tensor_options=None, post_tensor_options_kwarg_only=(), out=()), returns=(Return(name=None, type=BaseType(name=<BaseTy.Tensor: 3>), annotation=None),)), use_const_ref_for_mutable_tensors=False, device_guard=True, device_check=<DeviceCheckType.NoCheck: 0>, python_module=None, category_override=None, variants={<Variant.function: 1>, <Variant.method: 2>}, manual_kernel_registration=False, manual_cpp_binding=False, loc=Location(file='aten/src/ATen/native/native_functions.yaml', line=497), autogen=[], ufunc_inner_loop={}, structured=False, structured_delegate=OperatorName(name=BaseOperatorName(base='add', inplace=False, dunder_method=False, functional_overload=False), overload_name='out'), structured_inherits=None, precomputed=None, cpp_no_default_args=set(), is_abstract=True, has_composite_implicit_autograd_kernel=False, has_composite_implicit_autograd_nested_tensor_kernel=False, has_composite_explicit_autograd_kernel=False, has_composite_explicit_autograd_non_functional_kernel=False, tags={'pointwise', 'canonical'})
add_.Tensor && add.out

The in-place version that modifies the self argument directly.

- func: add_.Tensor(Tensor(a!) self, Tensor other, *, Scalar alpha=1) -> Tensor(a!)
  device_check: NoCheck   # TensorIterator
  variants: method
  structured_delegate: add.out
  dispatch:
    SparseCPU, SparseCUDA: add_sparse_
    SparseCsrCPU, SparseCsrCUDA: add_sparse_csr_
    MkldnnCPU: mkldnn_add_
    NestedTensorCPU, NestedTensorCUDA: NestedTensor_add__Tensor
  tags: pointwise

According to the PyTorch native README.md:

Tensor(a!) - members of a may be written to thus mutating the underlying data.

The notation Tensor(a!) self means the self argument is both an input and an output.
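
A quick illustration of that aliasing at the Python level:

import torch

t = torch.zeros(3)
r = t.add_(1)   # Tensor(a!) self: t is mutated in place...
assert r is t   # ...and the return value aliases self

The corresponding NativeFunction: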

NativeFunction(namespace='aten', func=FunctionSchema(name=OperatorName(name=BaseOperatorName(base='add', inplace=True, dunder_method=False, functional_overload=False), overload_name='Tensor'), arguments=Arguments(pre_self_positional=(), self_arg=SelfArgument(argument=Argument(name='self', type=BaseType(name=<BaseTy.Tensor: 3>), default=None, annotation=Annotation(alias_set=('a',), is_write=True, alias_set_after=()))), post_self_positional=(Argument(name='other', type=BaseType(name=<BaseTy.Tensor: 3>), default=None, annotation=None),), pre_tensor_options_kwarg_only=(Argument(name='alpha', type=BaseType(name=<BaseTy.Scalar: 12>), default='1', annotation=None),), tensor_options=None, post_tensor_options_kwarg_only=(), out=()), returns=(Return(name=None, type=BaseType(name=<BaseTy.Tensor: 3>), annotation=Annotation(alias_set=('a',), is_write=True, alias_set_after=())),)), use_const_ref_for_mutable_tensors=False, device_guard=True, device_check=<DeviceCheckType.NoCheck: 0>, python_module=None, category_override=None, variants={<Variant.method: 2>}, manual_kernel_registration=False, manual_cpp_binding=False, loc=Location(file='aten/src/ATen/native/native_functions.yaml', line=509), autogen=[], ufunc_inner_loop={}, structured=False, structured_delegate=OperatorName(name=BaseOperatorName(base='add', inplace=False, dunder_method=False, functional_overload=False), overload_name='out'), structured_inherits=None, precomputed=None, cpp_no_default_args=set(), is_abstract=True, has_composite_implicit_autograd_kernel=False, has_composite_implicit_autograd_nested_tensor_kernel=False, has_composite_explicit_autograd_kernel=False, has_composite_explicit_autograd_non_functional_kernel=False, tags={'pointwise'})
add.out

The variant of add with an out output parameter.

- func: add.out(Tensor self, Tensor other, *, Scalar alpha=1, Tensor(a!) out) -> Tensor(a!)
  device_check: NoCheck   # TensorIterator
  structured: True
  structured_inherits: TensorIteratorBase
  ufunc_inner_loop:
    Generic: add (AllAndComplex, BFloat16, Half, ComplexHalf)
    ScalarOnly: add (Bool)
  dispatch:
    SparseCPU: add_out_sparse_cpu
    SparseCUDA: add_out_sparse_cuda
    SparseCsrCPU: add_out_sparse_csr_cpu
    SparseCsrCUDA: add_out_sparse_csr_cuda
    MkldnnCPU: mkldnn_add_out
    MPS: add_out_mps
  tags: pointwise
NativeFunction(namespace='aten', func=FunctionSchema(name=OperatorName(name=BaseOperatorName(base='add', inplace=False, dunder_method=False, functional_overload=False), overload_name='out'), arguments=Arguments(pre_self_positional=(), self_arg=SelfArgument(argument=Argument(name='self', type=BaseType(name=<BaseTy.Tensor: 3>), default=None, annotation=None)), post_self_positional=(Argument(name='other', type=BaseType(name=<BaseTy.Tensor: 3>), default=None, annotation=None),), pre_tensor_options_kwarg_only=(Argument(name='alpha', type=BaseType(name=<BaseTy.Scalar: 12>), default='1', annotation=None),), tensor_options=None, post_tensor_options_kwarg_only=(), out=(Argument(name='out', type=BaseType(name=<BaseTy.Tensor: 3>), default=None, annotation=Annotation(alias_set=('a',), is_write=True, alias_set_after=())),)), returns=(Return(name=None, type=BaseType(name=<BaseTy.Tensor: 3>), annotation=Annotation(alias_set=('a',), is_write=True, alias_set_after=())),)), use_const_ref_for_mutable_tensors=False, device_guard=True, device_check=<DeviceCheckType.NoCheck: 0>, python_module=None, category_override=None, variants={<Variant.function: 1>}, manual_kernel_registration=False, manual_cpp_binding=False, loc=Location(file='aten/src/ATen/native/native_functions.yaml', line=520), autogen=[], ufunc_inner_loop={<UfuncKey.Generic: 7>: UfuncInnerLoop(name='add', supported_dtypes=<torchgen.utils.OrderedSet object at 0x7f600cff7910>, ufunc_key=<UfuncKey.Generic: 7>), <UfuncKey.ScalarOnly: 6>: UfuncInnerLoop(name='add', supported_dtypes=<torchgen.utils.OrderedSet object at 0x7f600cff7b80>, ufunc_key=<UfuncKey.ScalarOnly: 6>)}, structured=True, structured_delegate=None, structured_inherits='TensorIteratorBase', precomputed=None, cpp_no_default_args=set(), is_abstract=True, has_composite_implicit_autograd_kernel=False, has_composite_implicit_autograd_nested_tensor_kernel=False, has_composite_explicit_autograd_kernel=False, has_composite_explicit_autograd_non_functional_kernel=False, tags={'pointwise'})
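
The three schemas map onto the familiar Python-level calls (a small illustration):

import torch

a, b = torch.ones(2), torch.ones(2)
c = torch.add(a, b, alpha=2)   # add.Tensor: returns a new tensor (a + 2*b)
a.add_(b)                      # add_.Tensor: mutates a in place
buf = torch.empty(2)
torch.add(a, b, out=buf)       # add.out: writes the result into buf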

function_signatures

    function_signatures = load_signatures(native_functions, deprecated_yaml_path, method=False, pyi=True)

function_signatures is a list of PythonSignatureNativeFunctionPair objects; its zeroth element is:

PythonSignatureNativeFunctionPair(signature=PythonSignature(name='_cast_Byte', input_args=(PythonArgument(name='self', type=BaseType(name=<BaseTy.Tensor: 3>), default=None, default_init=None), PythonArgument(name='non_blocking', type=BaseType(name=<BaseTy.bool: 9>), default='False', default_init=None)), input_kwargs=(), output_args=None, returns=PythonReturns(returns=(Return(name=None, type=BaseType(name=<BaseTy.Tensor: 3>), annotation=None),)), tensor_options_args=(), method=False), function=NativeFunction(namespace='aten', func=FunctionSchema(name=OperatorName(name=BaseOperatorName(base='_cast_Byte', inplace=False, dunder_method=False, functional_overload=False), overload_name=''), arguments=Arguments(pre_self_positional=(), self_arg=SelfArgument(argument=Argument(name='self', type=BaseType(name=<BaseTy.Tensor: 3>), default=None, annotation=None)), post_self_positional=(Argument(name='non_blocking', type=BaseType(name=<BaseTy.bool: 9>), default='False', annotation=None),), pre_tensor_options_kwarg_only=(), tensor_options=None, post_tensor_options_kwarg_only=(), out=()), returns=(Return(name=None, type=BaseType(name=<BaseTy.Tensor: 3>), annotation=None),)), use_const_ref_for_mutable_tensors=False, device_guard=True, device_check=<DeviceCheckType.ExactSame: 1>, python_module=None, category_override=None, variants={<Variant.function: 1>}, manual_kernel_registration=False, manual_cpp_binding=False, loc=Location(file='aten/src/ATen/native/native_functions.yaml', line=9), autogen=[], ufunc_inner_loop={}, structured=False, structured_delegate=None, structured_inherits=None, precomputed=None, cpp_no_default_args=set(), is_abstract=False, has_composite_implicit_autograd_kernel=True, has_composite_implicit_autograd_nested_tensor_kernel=False, has_composite_explicit_autograd_kernel=False, has_composite_explicit_autograd_non_functional_kernel=False, tags=set()))
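
Each pair couples a Python-level signature with its underlying NativeFunction. Continuing the earlier parse_native_yaml sketch, this too can be poked at interactively (load_signatures lives in tools/autograd/gen_python_functions.py; native_functions is the filtered list from the previous step):

from tools.autograd.gen_python_functions import load_signatures

function_signatures = load_signatures(
    native_functions, "tools/autograd/deprecated.yaml", method=False, pyi=True
)
pair = function_signatures[0]
print(pair.signature.name)      # '_cast_Byte'
print(pair.function.func.name)  # schema name of the paired NativeFunction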

The elements representing rand are shown below, six in total, corresponding one-to-one with the native_functions entries above:

rand.names & rand.names_out
PythonSignatureNativeFunctionPair(signature=PythonSignature(name='rand', input_args=(PythonArgument(name='size', type=ListType(elem=BaseType(name=<BaseTy.SymInt: 17>), size=None), default=None, default_init=None),), input_kwargs=(PythonArgument(name='names', type=OptionalType(elem=ListType(elem=BaseType(name=<BaseTy.Dimname: 5>), size=None)), default=None, default_init=None),), output_args=None, returns=PythonReturns(returns=(Return(name=None, type=BaseType(name=<BaseTy.Tensor: 3>), annotation=None),)), tensor_options_args=(PythonArgument(name='dtype', type=OptionalType(elem=BaseType(name=<BaseTy.ScalarType: 2>)), default='None', default_init=None), PythonArgument(name='layout', type=OptionalType(elem=BaseType(name=<BaseTy.Layout: 10>)), default='None', default_init=None), PythonArgument(name='device', type=OptionalType(elem=BaseType(name=<BaseTy.Device: 11>)), default='None', default_init='torch::tensors::get_default_device()'), PythonArgument(name='pin_memory', type=OptionalType(elem=BaseType(name=<BaseTy.bool: 9>)), default='False', default_init=None), PythonArgument(name='requires_grad', type=OptionalType(elem=BaseType(name=<BaseTy.bool: 9>)), default='False', default_init=None)), method=False), function=NativeFunction(namespace='aten', func=FunctionSchema(name=OperatorName(name=BaseOperatorName(base='rand', inplace=False, dunder_method=False, functional_overload=False), 
overload_name='names'), arguments=Arguments(pre_self_positional=(), self_arg=None, post_self_positional=(Argument(name='size', type=ListType(elem=BaseType(name=<BaseTy.SymInt: 17>), size=None), default=None, annotation=None),), pre_tensor_options_kwarg_only=(Argument(name='names', type=OptionalType(elem=ListType(elem=BaseType(name=<BaseTy.Dimname: 5>), size=None)), default=None, annotation=None),), tensor_options=TensorOptionsArguments(dtype=Argument(name='dtype', type=OptionalType(elem=BaseType(name=<BaseTy.ScalarType: 2>)), default='None', annotation=None), layout=Argument(name='layout', type=OptionalType(elem=BaseType(name=<BaseTy.Layout: 10>)), default='None', annotation=None), device=Argument(name='device', type=OptionalType(elem=BaseType(name=<BaseTy.Device: 11>)), default='None', annotation=None), pin_memory=Argument(name='pin_memory', type=OptionalType(elem=BaseType(name=<BaseTy.bool: 9>)), default='None', annotation=None)), post_tensor_options_kwarg_only=(), out=()), returns=(Return(name=None, type=BaseType(name=<BaseTy.Tensor: 3>), annotation=None),)), use_const_ref_for_mutable_tensors=False, device_guard=False, device_check=<DeviceCheckType.NoCheck: 0>, python_module=None, category_override=None, variants={<Variant.function: 1>}, manual_kernel_registration=False, manual_cpp_binding=False, loc=Location(file='aten/src/ATen/native/native_functions.yaml', line=4254), 
autogen=[OperatorName(name=BaseOperatorName(base='rand', inplace=False, dunder_method=False, functional_overload=False), overload_name='names_out')], 
ufunc_inner_loop={}, structured=False, structured_delegate=None, structured_inherits=None, precomputed=None, cpp_no_default_args=set(), is_abstract=True, has_composite_implicit_autograd_kernel=False, has_composite_implicit_autograd_nested_tensor_kernel=False, has_composite_explicit_autograd_kernel=True, has_composite_explicit_autograd_non_functional_kernel=False, tags={'nondeterministic_seeded'}))

Note that names autogenerates a names_out function.

rand.generator_with_names & rand.generator_with_names_out
PythonSignatureNativeFunctionPair(signature=PythonSignature(name='rand', input_args=(PythonArgument(name='size', type=ListType(elem=BaseType(name=<BaseTy.SymInt: 17>), size=None), default=None, default_init=None),), input_kwargs=(PythonArgument(name='generator', type=OptionalType(elem=BaseType(name=<BaseTy.Generator: 1>)), default=None, default_init=None), PythonArgument(name='names', type=OptionalType(elem=ListType(elem=BaseType(name=<BaseTy.Dimname: 5>), size=None)), default=None, default_init=None)), output_args=None, returns=PythonReturns(returns=(Return(name=None, type=BaseType(name=<BaseTy.Tensor: 3>), annotation=None),)), tensor_options_args=(PythonArgument(name='dtype', type=OptionalType(elem=BaseType(name=<BaseTy.ScalarType: 2>)), default='None', default_init=None), PythonArgument(name='layout', type=OptionalType(elem=BaseType(name=<BaseTy.Layout: 10>)), default='None', default_init=None), PythonArgument(name='device', type=OptionalType(elem=BaseType(name=<BaseTy.Device: 11>)), default='None', default_init='torch::tensors::get_default_device()'), PythonArgument(name='pin_memory', type=OptionalType(elem=BaseType(name=<BaseTy.bool: 9>)), default='False', default_init=None), PythonArgument(name='requires_grad', type=OptionalType(elem=BaseType(name=<BaseTy.bool: 9>)), default='False', default_init=None)), method=False), function=NativeFunction(namespace='aten', func=FunctionSchema(name=OperatorName(name=BaseOperatorName(base='rand', inplace=False, dunder_method=False, functional_overload=False), 
overload_name='generator_with_names'), arguments=Arguments(pre_self_positional=(), self_arg=None, post_self_positional=(Argument(name='size', type=ListType(elem=BaseType(name=<BaseTy.SymInt: 17>), size=None), default=None, annotation=None),), pre_tensor_options_kwarg_only=(Argument(name='generator', type=OptionalType(elem=BaseType(name=<BaseTy.Generator: 1>)), default=None, annotation=None), Argument(name='names', type=OptionalType(elem=ListType(elem=BaseType(name=<BaseTy.Dimname: 5>), size=None)), default=None, annotation=None)), tensor_options=TensorOptionsArguments(dtype=Argument(name='dtype', type=OptionalType(elem=BaseType(name=<BaseTy.ScalarType: 2>)), default='None', annotation=None), layout=Argument(name='layout', type=OptionalType(elem=BaseType(name=<BaseTy.Layout: 10>)), default='None', annotation=None), device=Argument(name='device', type=OptionalType(elem=BaseType(name=<BaseTy.Device: 11>)), default='None', annotation=None), pin_memory=Argument(name='pin_memory', type=OptionalType(elem=BaseType(name=<BaseTy.bool: 9>)), default='None', annotation=None)), post_tensor_options_kwarg_only=(), out=()), returns=(Return(name=None, type=BaseType(name=<BaseTy.Tensor: 3>), annotation=None),)), use_const_ref_for_mutable_tensors=False, device_guard=False, device_check=<DeviceCheckType.NoCheck: 0>, python_module=None, category_override=None, variants={<Variant.function: 1>}, manual_kernel_registration=False, manual_cpp_binding=False, loc=Location(file='aten/src/ATen/native/native_functions.yaml', line=4262), 
autogen=[OperatorName(name=BaseOperatorName(base='rand', inplace=False, dunder_method=False, functional_overload=False), overload_name='generator_with_names_out')], 
ufunc_inner_loop={}, structured=False, structured_delegate=None, structured_inherits=None, precomputed=None, cpp_no_default_args=set(), is_abstract=True, has_composite_implicit_autograd_kernel=False, has_composite_implicit_autograd_nested_tensor_kernel=False, has_composite_explicit_autograd_kernel=True, has_composite_explicit_autograd_non_functional_kernel=False, tags={'nondeterministic_seeded'}))

Note that generator_with_names autogenerates a generator_with_names_out function.

rand
PythonSignatureNativeFunctionPair(signature=PythonSignature(name='rand', input_args=(PythonArgument(name='size', type=ListType(elem=BaseType(name=<BaseTy.SymInt: 17>), size=None), default=None, default_init=None),), input_kwargs=(), output_args=None, returns=PythonReturns(returns=(Return(name=None, type=BaseType(name=<BaseTy.Tensor: 3>), annotation=None),)), tensor_options_args=(PythonArgument(name='dtype', type=OptionalType(elem=BaseType(name=<BaseTy.ScalarType: 2>)), default='None', default_init=None), PythonArgument(name='layout', type=OptionalType(elem=BaseType(name=<BaseTy.Layout: 10>)), default='None', default_init=None), PythonArgument(name='device', type=OptionalType(elem=BaseType(name=<BaseTy.Device: 11>)), default='None', default_init='torch::tensors::get_default_device()'), PythonArgument(name='pin_memory', type=OptionalType(elem=BaseType(name=<BaseTy.bool: 9>)), default='False', default_init=None), PythonArgument(name='requires_grad', type=OptionalType(elem=BaseType(name=<BaseTy.bool: 9>)), default='False', default_init=None)), method=False), function=NativeFunction(namespace='aten', func=FunctionSchema(name=OperatorName(name=BaseOperatorName(base='rand', inplace=False, dunder_method=False, functional_overload=False), 
overload_name=''), arguments=Arguments(pre_self_positional=(), self_arg=None, post_self_positional=(Argument(name='size', type=ListType(elem=BaseType(name=<BaseTy.SymInt: 17>), size=None), default=None, annotation=None),), pre_tensor_options_kwarg_only=(), tensor_options=TensorOptionsArguments(dtype=Argument(name='dtype', type=OptionalType(elem=BaseType(name=<BaseTy.ScalarType: 2>)), default='None', annotation=None), layout=Argument(name='layout', type=OptionalType(elem=BaseType(name=<BaseTy.Layout: 10>)), default='None', annotation=None), device=Argument(name='device', type=OptionalType(elem=BaseType(name=<BaseTy.Device: 11>)), default='None', annotation=None), pin_memory=Argument(name='pin_memory', type=OptionalType(elem=BaseType(name=<BaseTy.bool: 9>)), default='None', annotation=None)), post_tensor_options_kwarg_only=(), out=()), returns=(Return(name=None, type=BaseType(name=<BaseTy.Tensor: 3>), annotation=None),)), use_const_ref_for_mutable_tensors=False, device_guard=True, device_check=<DeviceCheckType.ExactSame: 1>, python_module=None, category_override=None, variants={<Variant.function: 1>}, manual_kernel_registration=False, manual_cpp_binding=False, loc=Location(file='aten/src/ATen/native/native_functions.yaml', line=4270), 
autogen=[], ufunc_inner_loop={}, structured=False, structured_delegate=None, structured_inherits=None, precomputed=None, cpp_no_default_args=set(), is_abstract=True, has_composite_implicit_autograd_kernel=False, has_composite_implicit_autograd_nested_tensor_kernel=False, has_composite_explicit_autograd_kernel=True, has_composite_explicit_autograd_non_functional_kernel=False, tags={'nondeterministic_seeded'}))
rand.generator
PythonSignatureNativeFunctionPair(signature=PythonSignature(name='rand', input_args=(PythonArgument(name='size', type=ListType(elem=BaseType(name=<BaseTy.SymInt: 17>), size=None), default=None, default_init=None),), input_kwargs=(PythonArgument(name='generator', type=OptionalType(elem=BaseType(name=<BaseTy.Generator: 1>)), default=None, default_init=None),), output_args=None, returns=PythonReturns(returns=(Return(name=None, type=BaseType(name=<BaseTy.Tensor: 3>), annotation=None),)), tensor_options_args=(PythonArgument(name='dtype', type=OptionalType(elem=BaseType(name=<BaseTy.ScalarType: 2>)), default='None', default_init=None), PythonArgument(name='layout', type=OptionalType(elem=BaseType(name=<BaseTy.Layout: 10>)), default='None', default_init=None), PythonArgument(name='device', type=OptionalType(elem=BaseType(name=<BaseTy.Device: 11>)), default='None', default_init='torch::tensors::get_default_device()'), PythonArgument(name='pin_memory', type=OptionalType(elem=BaseType(name=<BaseTy.bool: 9>)), default='False', default_init=None), PythonArgument(name='requires_grad', type=OptionalType(elem=BaseType(name=<BaseTy.bool: 9>)), default='False', default_init=None)), method=False), function=NativeFunction(namespace='aten', func=FunctionSchema(name=OperatorName(name=BaseOperatorName(base='rand', inplace=False, dunder_method=False, functional_overload=False), 
overload_name='generator'), arguments=Arguments(pre_self_positional=(), self_arg=None, post_self_positional=(Argument(name='size', type=ListType(elem=BaseType(name=<BaseTy.SymInt: 17>), size=None), default=None, annotation=None),), pre_tensor_options_kwarg_only=(Argument(name='generator', type=OptionalType(elem=BaseType(name=<BaseTy.Generator: 1>)), default=None, annotation=None),), tensor_options=TensorOptionsArguments(dtype=Argument(name='dtype', type=OptionalType(elem=BaseType(name=<BaseTy.ScalarType: 2>)), default='None', annotation=None), layout=Argument(name='layout', type=OptionalType(elem=BaseType(name=<BaseTy.Layout: 10>)), default='None', annotation=None), device=Argument(name='device', type=OptionalType(elem=BaseType(name=<BaseTy.Device: 11>)), default='None', annotation=None), pin_memory=Argument(name='pin_memory', type=OptionalType(elem=BaseType(name=<BaseTy.bool: 9>)), default='None', annotation=None)), post_tensor_options_kwarg_only=(), out=()), returns=(Return(name=None, type=BaseType(name=<BaseTy.Tensor: 3>), annotation=None),)), use_const_ref_for_mutable_tensors=False, device_guard=True, device_check=<DeviceCheckType.ExactSame: 1>, python_module=None, category_override=None, variants={<Variant.function: 1>}, manual_kernel_registration=False, manual_cpp_binding=False, loc=Location(file='aten/src/ATen/native/native_functions.yaml', line=4275), 
autogen=[], ufunc_inner_loop={}, structured=False, structured_delegate=None, structured_inherits=None, precomputed=None, cpp_no_default_args=set(), is_abstract=True, has_composite_implicit_autograd_kernel=False, has_composite_implicit_autograd_nested_tensor_kernel=False, has_composite_explicit_autograd_kernel=True, has_composite_explicit_autograd_non_functional_kernel=False, tags={'nondeterministic_seeded'}))
rand.out
PythonSignatureNativeFunctionPair(signature=PythonSignature(name='rand', input_args=(PythonArgument(name='size', type=ListType(elem=BaseType(name=<BaseTy.SymInt: 17>), size=None), default=None, default_init=None),), input_kwargs=(), output_args=PythonOutArgument(name='out', type=BaseType(name=<BaseTy.Tensor: 3>), default='None', default_init=None, outputs=(PythonArgument(name='out', type=BaseType(name=<BaseTy.Tensor: 3>), default=None, default_init=None),)), returns=PythonReturns(returns=(Return(name=None, type=BaseType(name=<BaseTy.Tensor: 3>), annotation=Annotation(alias_set=('a',), is_write=True, alias_set_after=())),)), tensor_options_args=(PythonArgument(name='dtype', type=OptionalType(elem=BaseType(name=<BaseTy.ScalarType: 2>)), default='None', default_init=None), PythonArgument(name='layout', type=OptionalType(elem=BaseType(name=<BaseTy.Layout: 10>)), default='None', default_init=None), PythonArgument(name='device', type=OptionalType(elem=BaseType(name=<BaseTy.Device: 11>)), default='None', default_init='torch::tensors::get_default_device()'), PythonArgument(name='pin_memory', type=OptionalType(elem=BaseType(name=<BaseTy.bool: 9>)), default='False', default_init=None), PythonArgument(name='requires_grad', type=OptionalType(elem=BaseType(name=<BaseTy.bool: 9>)), default='False', default_init=None)), method=False), function=NativeFunction(namespace='aten', func=FunctionSchema(name=OperatorName(name=BaseOperatorName(base='rand', inplace=False, dunder_method=False, functional_overload=False), 
overload_name='out'), arguments=Arguments(pre_self_positional=(), self_arg=None, post_self_positional=(Argument(name='size', type=ListType(elem=BaseType(name=<BaseTy.SymInt: 17>), size=None), default=None, annotation=None),), pre_tensor_options_kwarg_only=(), tensor_options=None, post_tensor_options_kwarg_only=(), out=(Argument(name='out', type=BaseType(name=<BaseTy.Tensor: 3>), default=None, annotation=Annotation(alias_set=('a',), is_write=True, alias_set_after=())),)), returns=(Return(name=None, type=BaseType(name=<BaseTy.Tensor: 3>), annotation=Annotation(alias_set=('a',), is_write=True, alias_set_after=())),)), use_const_ref_for_mutable_tensors=False, device_guard=True, device_check=<DeviceCheckType.ExactSame: 1>, python_module=None, category_override=None, variants={<Variant.function: 1>}, manual_kernel_registration=False, manual_cpp_binding=False, loc=Location(file='aten/src/ATen/native/native_functions.yaml', line=4280), 
autogen=[], ufunc_inner_loop={}, structured=False, structured_delegate=None, structured_inherits=None, precomputed=None, cpp_no_default_args=set(), is_abstract=True, has_composite_implicit_autograd_kernel=False, has_composite_implicit_autograd_nested_tensor_kernel=False, has_composite_explicit_autograd_kernel=True, has_composite_explicit_autograd_non_functional_kernel=False, tags={'nondeterministic_seeded'}))
rand.generator_out
PythonSignatureNativeFunctionPair(signature=PythonSignature(name='rand', input_args=(PythonArgument(name='size', type=ListType(elem=BaseType(name=<BaseTy.SymInt: 17>), size=None), default=None, default_init=None),), input_kwargs=(PythonArgument(name='generator', type=OptionalType(elem=BaseType(name=<BaseTy.Generator: 1>)), default=None, default_init=None),), output_args=PythonOutArgument(name='out', type=BaseType(name=<BaseTy.Tensor: 3>), default='None', default_init=None, outputs=(PythonArgument(name='out', type=BaseType(name=<BaseTy.Tensor: 3>), default=None, default_init=None),)), returns=PythonReturns(returns=(Return(name=None, type=BaseType(name=<BaseTy.Tensor: 3>), annotation=Annotation(alias_set=('a',), is_write=True, alias_set_after=())),)), tensor_options_args=(PythonArgument(name='dtype', type=OptionalType(elem=BaseType(name=<BaseTy.ScalarType: 2>)), default='None', default_init=None), PythonArgument(name='layout', type=OptionalType(elem=BaseType(name=<BaseTy.Layout: 10>)), default='None', default_init=None), PythonArgument(name='device', type=OptionalType(elem=BaseType(name=<BaseTy.Device: 11>)), default='None', default_init='torch::tensors::get_default_device()'), PythonArgument(name='pin_memory', type=OptionalType(elem=BaseType(name=<BaseTy.bool: 9>)), default='False', default_init=None), PythonArgument(name='requires_grad', type=OptionalType(elem=BaseType(name=<BaseTy.bool: 9>)), default='False', default_init=None)), method=False), function=NativeFunction(namespace='aten', func=FunctionSchema(name=OperatorName(name=BaseOperatorName(base='rand', inplace=False, dunder_method=False, functional_overload=False), 
overload_name='generator_out'), arguments=Arguments(pre_self_positional=(), self_arg=None, post_self_positional=(Argument(name='size', type=ListType(elem=BaseType(name=<BaseTy.SymInt: 17>), size=None), default=None, annotation=None),), pre_tensor_options_kwarg_only=(Argument(name='generator', type=OptionalType(elem=BaseType(name=<BaseTy.Generator: 1>)), default=None, annotation=None),), tensor_options=None, post_tensor_options_kwarg_only=(), out=(Argument(name='out', type=BaseType(name=<BaseTy.Tensor: 3>), default=None, annotation=Annotation(alias_set=('a',), is_write=True, alias_set_after=())),)), returns=(Return(name=None, type=BaseType(name=<BaseTy.Tensor: 3>), annotation=Annotation(alias_set=('a',), is_write=True, alias_set_after=())),)), use_const_ref_for_mutable_tensors=False, device_guard=True, device_check=<DeviceCheckType.ExactSame: 1>, python_module=None, category_override=None, variants={<Variant.function: 1>}, manual_kernel_registration=False, manual_cpp_binding=False, loc=Location(file='aten/src/ATen/native/native_functions.yaml', line=4285), 
autogen=[], ufunc_inner_loop={}, structured=False, structured_delegate=None, structured_inherits=None, precomputed=None, cpp_no_default_args=set(), is_abstract=False, has_composite_implicit_autograd_kernel=True, has_composite_implicit_autograd_nested_tensor_kernel=False, has_composite_explicit_autograd_kernel=False, has_composite_explicit_autograd_non_functional_kernel=False, tags={'nondeterministic_seeded'}))

The elements representing add are shown below, three in total, again corresponding one-to-one with the native_functions entries above.

add.Tensor && add.out
PythonSignatureNativeFunctionPair(signature=PythonSignature(name='add', input_args=(PythonArgument(name='self', type=BaseType(name=<BaseTy.Tensor: 3>), default=None, default_init=None), PythonArgument(name='other', type=BaseType(name=<BaseTy.Tensor: 3>), default=None, default_init=None)), input_kwargs=(PythonArgument(name='alpha', type=BaseType(name=<BaseTy.Scalar: 12>), default='1', default_init=None),), output_args=None, returns=PythonReturns(returns=(Return(name=None, type=BaseType(name=<BaseTy.Tensor: 3>), annotation=None),)), tensor_options_args=(), method=False), function=NativeFunction(namespace='aten', func=FunctionSchema(name=OperatorName(name=BaseOperatorName(base='add', inplace=False, dunder_method=False, functional_overload=False), overload_name='Tensor'), arguments=Arguments(pre_self_positional=(), self_arg=SelfArgument(argument=Argument(name='self', type=BaseType(name=<BaseTy.Tensor: 3>), default=None, annotation=None)), post_self_positional=(Argument(name='other', type=BaseType(name=<BaseTy.Tensor: 3>), default=None, annotation=None),), pre_tensor_options_kwarg_only=(Argument(name='alpha', type=BaseType(name=<BaseTy.Scalar: 12>), default='1', annotation=None),), tensor_options=None, post_tensor_options_kwarg_only=(), out=()), returns=(Return(name=None, type=BaseType(name=<BaseTy.Tensor: 3>), annotation=None),)), use_const_ref_for_mutable_tensors=False, device_guard=True, device_check=<DeviceCheckType.NoCheck: 0>, python_module=None, category_override=None, variants={<Variant.function: 1>, <Variant.method: 2>}, manual_kernel_registration=False, manual_cpp_binding=False, loc=Location(file='aten/src/ATen/native/native_functions.yaml', line=497), autogen=[], ufunc_inner_loop={}, structured=False, structured_delegate=OperatorName(name=BaseOperatorName(base='add', inplace=False, dunder_method=False, functional_overload=False), overload_name='out'), structured_inherits=None, precomputed=None, cpp_no_default_args=set(), is_abstract=True, has_composite_implicit_autograd_kernel=False, has_composite_implicit_autograd_nested_tensor_kernel=False, has_composite_explicit_autograd_kernel=False, has_composite_explicit_autograd_non_functional_kernel=False, tags={'pointwise', 'canonical'}))
add_.Tensor && add.out
PythonSignatureNativeFunctionPair(signature=PythonSignature(name='add_', input_args=(PythonArgument(name='self', type=BaseType(name=<BaseTy.Tensor: 3>), default=None, default_init=None), PythonArgument(name='other', type=BaseType(name=<BaseTy.Tensor: 3>), default=None, default_init=None)), input_kwargs=(PythonArgument(name='alpha', type=BaseType(name=<BaseTy.Scalar: 12>), default='1', default_init=None),), output_args=None, returns=PythonReturns(returns=(Return(name=None, type=BaseType(name=<BaseTy.Tensor: 3>), annotation=Annotation(alias_set=('a',), is_write=True, alias_set_after=())),)), tensor_options_args=(), method=False), function=NativeFunction(namespace='aten', func=FunctionSchema(name=OperatorName(name=BaseOperatorName(base='add', inplace=True, dunder_method=False, functional_overload=False), overload_name='Tensor'), arguments=Arguments(pre_self_positional=(), self_arg=SelfArgument(argument=Argument(name='self', type=BaseType(name=<BaseTy.Tensor: 3>), default=None, annotation=Annotation(alias_set=('a',), is_write=True, alias_set_after=()))), post_self_positional=(Argument(name='other', type=BaseType(name=<BaseTy.Tensor: 3>), default=None, annotation=None),), pre_tensor_options_kwarg_only=(Argument(name='alpha', type=BaseType(name=<BaseTy.Scalar: 12>), default='1', annotation=None),), tensor_options=None, post_tensor_options_kwarg_only=(), out=()), returns=(Return(name=None, type=BaseType(name=<BaseTy.Tensor: 3>), annotation=Annotation(alias_set=('a',), is_write=True, alias_set_after=())),)), use_const_ref_for_mutable_tensors=False, device_guard=True, device_check=<DeviceCheckType.NoCheck: 0>, python_module=None, category_override=None, variants={<Variant.method: 2>}, manual_kernel_registration=False, manual_cpp_binding=False, loc=Location(file='aten/src/ATen/native/native_functions.yaml', line=509), autogen=[], ufunc_inner_loop={}, structured=False, structured_delegate=OperatorName(name=BaseOperatorName(base='add', inplace=False, dunder_method=False, functional_overload=False), overload_name='out'), structured_inherits=None, precomputed=None, cpp_no_default_args=set(), is_abstract=True, has_composite_implicit_autograd_kernel=False, has_composite_implicit_autograd_nested_tensor_kernel=False, has_composite_explicit_autograd_kernel=False, has_composite_explicit_autograd_non_functional_kernel=False, tags={'pointwise'}))
add.out
PythonSignatureNativeFunctionPair(signature=PythonSignature(name='add', input_args=(PythonArgument(name='self', type=BaseType(name=<BaseTy.Tensor: 3>), default=None, default_init=None), PythonArgument(name='other', type=BaseType(name=<BaseTy.Tensor: 3>), default=None, default_init=None)), input_kwargs=(PythonArgument(name='alpha', type=BaseType(name=<BaseTy.Scalar: 12>), default='1', default_init=None),), output_args=PythonOutArgument(name='out', type=BaseType(name=<BaseTy.Tensor: 3>), default='None', default_init=None, outputs=(PythonArgument(name='out', type=BaseType(name=<BaseTy.Tensor: 3>), default=None, default_init=None),)), returns=PythonReturns(returns=(Return(name=None, type=BaseType(name=<BaseTy.Tensor: 3>), annotation=Annotation(alias_set=('a',), is_write=True, alias_set_after=())),)), tensor_options_args=(), method=False), function=NativeFunction(namespace='aten', func=FunctionSchema(name=OperatorName(name=BaseOperatorName(base='add', inplace=False, dunder_method=False, functional_overload=False), overload_name='out'), arguments=Arguments(pre_self_positional=(), self_arg=SelfArgument(argument=Argument(name='self', type=BaseType(name=<BaseTy.Tensor: 3>), default=None, annotation=None)), post_self_positional=(Argument(name='other', type=BaseType(name=<BaseTy.Tensor: 3>), default=None, annotation=None),), pre_tensor_options_kwarg_only=(Argument(name='alpha', type=BaseType(name=<BaseTy.Scalar: 12>), default='1', annotation=None),), tensor_options=None, post_tensor_options_kwarg_only=(), out=(Argument(name='out', type=BaseType(name=<BaseTy.Tensor: 3>), default=None, annotation=Annotation(alias_set=('a',), is_write=True, alias_set_after=())),)), returns=(Return(name=None, type=BaseType(name=<BaseTy.Tensor: 3>), annotation=Annotation(alias_set=('a',), is_write=True, alias_set_after=())),)), use_const_ref_for_mutable_tensors=False, device_guard=True, device_check=<DeviceCheckType.NoCheck: 0>, python_module=None, category_override=None, variants={<Variant.function: 1>}, manual_kernel_registration=False, manual_cpp_binding=False, loc=Location(file='aten/src/ATen/native/native_functions.yaml', line=520), autogen=[], ufunc_inner_loop={<UfuncKey.Generic: 7>: UfuncInnerLoop(name='add', supported_dtypes=<torchgen.utils.OrderedSet object at 0x7f600cff7910>, ufunc_key=<UfuncKey.Generic: 7>), <UfuncKey.ScalarOnly: 6>: UfuncInnerLoop(name='add', supported_dtypes=<torchgen.utils.OrderedSet object at 0x7f600cff7b80>, ufunc_key=<UfuncKey.ScalarOnly: 6>)}, structured=True, structured_delegate=None, structured_inherits='TensorIteratorBase', precomputed=None, cpp_no_default_args=set(), is_abstract=True, has_composite_implicit_autograd_kernel=False, has_composite_implicit_autograd_nested_tensor_kernel=False, has_composite_explicit_autograd_kernel=False, has_composite_explicit_autograd_non_functional_kernel=False, tags={'pointwise'}))

sig_groups

sig_groups is a list of PythonSignatureGroup objects. Like PythonSignatureNativeFunctionPair, a PythonSignatureGroup pairs a PythonSignature with a NativeFunction.

The difference is that PythonSignatureGroup additionally carries an outplace member (holding the matching out= variant, if any).

    sig_groups = get_py_torch_functions(function_signatures)

The zeroth element of sig_groups looks like this:

PythonSignatureGroup(signature=PythonSignature(name='__and__', input_args=(PythonArgument(name='self', type=BaseType(name=<BaseTy.Tensor: 3>), default=None, default_init=None), PythonArgument(name='other', type=BaseType(name=<BaseTy.Tensor: 3>), default=None, default_init=None)), input_kwargs=(), output_args=None, returns=PythonReturns(returns=(Return(name=None, type=BaseType(name=<BaseTy.Tensor: 3>), annotation=None),)), tensor_options_args=(), method=False), base=NativeFunction(namespace='aten', func=FunctionSchema(name=OperatorName(name=BaseOperatorName(base='and', inplace=False, dunder_method=True, functional_overload=False), overload_name='Tensor'), arguments=Arguments(pre_self_positional=(), self_arg=SelfArgument(argument=Argument(name='self', type=BaseType(name=<BaseTy.Tensor: 3>), default=None, annotation=None)), post_self_positional=(Argument(name='other', type=BaseType(name=<BaseTy.Tensor: 3>), default=None, annotation=None),), pre_tensor_options_kwarg_only=(), tensor_options=None, post_tensor_options_kwarg_only=(), out=()), returns=(Return(name=None, type=BaseType(name=<BaseTy.Tensor: 3>), annotation=None),)), use_const_ref_for_mutable_tensors=False, device_guard=True, device_check=<DeviceCheckType.NoCheck: 0>, python_module=None, category_override=None, variants={<Variant.method: 2>, <Variant.function: 1>}, manual_kernel_registration=False, manual_cpp_binding=False, loc=Location(file='aten/src/ATen/native/native_functions.yaml', line=7635), autogen=[], ufunc_inner_loop={}, structured=False, structured_delegate=None, structured_inherits=None, precomputed=None, cpp_no_default_args=set(), is_abstract=False, has_composite_implicit_autograd_kernel=True, has_composite_implicit_autograd_nested_tensor_kernel=False, has_composite_explicit_autograd_kernel=False, has_composite_explicit_autograd_non_functional_kernel=False, tags=set()), outplace=None)

The four elements representing the rand function are listed below. The original eight functions have been arranged into pairs according to whether they have an out variant, giving four pairs in total.

rand.generator_with_names & rand.generator_with_names_out
PythonSignatureGroup(signature=PythonSignature(name='rand', input_args=(PythonArgument(name='size', type=ListType(elem=BaseType(name=<BaseTy.SymInt: 17>), size=None), default=None, default_init=None),), input_kwargs=(PythonArgument(name='generator', type=OptionalType(elem=BaseType(name=<BaseTy.Generator: 1>)), default=None, default_init=None), PythonArgument(name='names', type=OptionalType(elem=ListType(elem=BaseType(name=<BaseTy.Dimname: 5>), size=None)), default=None, default_init=None)), output_args=None, returns=PythonReturns(returns=(Return(name=None, type=BaseType(name=<BaseTy.Tensor: 3>), annotation=None),)), tensor_options_args=(PythonArgument(name='dtype', type=OptionalType(elem=BaseType(name=<BaseTy.ScalarType: 2>)), default='None', default_init=None), PythonArgument(name='layout', type=OptionalType(elem=BaseType(name=<BaseTy.Layout: 10>)), default='None', default_init=None), PythonArgument(name='device', type=OptionalType(elem=BaseType(name=<BaseTy.Device: 11>)), default='None', default_init='torch::tensors::get_default_device()'), PythonArgument(name='pin_memory', type=OptionalType(elem=BaseType(name=<BaseTy.bool: 9>)), default='False', default_init=None), PythonArgument(name='requires_grad', type=OptionalType(elem=BaseType(name=<BaseTy.bool: 9>)), default='False', default_init=None)), method=False), base=NativeFunction(namespace='aten', func=FunctionSchema(name=OperatorName(name=BaseOperatorName(base='rand', inplace=False, dunder_method=False, functional_overload=False), overload_name='generator_with_names'), arguments=Arguments(pre_self_positional=(), self_arg=None, post_self_positional=(Argument(name='size', type=ListType(elem=BaseType(name=<BaseTy.SymInt: 17>), size=None), default=None, annotation=None),), pre_tensor_options_kwarg_only=(Argument(name='generator', type=OptionalType(elem=BaseType(name=<BaseTy.Generator: 1>)), default=None, annotation=None), Argument(name='names', type=OptionalType(elem=ListType(elem=BaseType(name=<BaseTy.Dimname: 5>), size=None)), default=None, annotation=None)), tensor_options=TensorOptionsArguments(dtype=Argument(name='dtype', type=OptionalType(elem=BaseType(name=<BaseTy.ScalarType: 2>)), default='None', annotation=None), layout=Argument(name='layout', type=OptionalType(elem=BaseType(name=<BaseTy.Layout: 10>)), default='None', annotation=None), device=Argument(name='device', type=OptionalType(elem=BaseType(name=<BaseTy.Device: 11>)), default='None', annotation=None), pin_memory=Argument(name='pin_memory', type=OptionalType(elem=BaseType(name=<BaseTy.bool: 9>)), default='None', annotation=None)), post_tensor_options_kwarg_only=(), out=()), returns=(Return(name=None, type=BaseType(name=<BaseTy.Tensor: 3>), annotation=None),)), use_const_ref_for_mutable_tensors=False, device_guard=False, device_check=<DeviceCheckType.NoCheck: 0>, python_module=None, category_override=None, variants={<Variant.function: 1>}, manual_kernel_registration=False, manual_cpp_binding=False, loc=Location(file='aten/src/ATen/native/native_functions.yaml', line=4262), autogen=[OperatorName(name=BaseOperatorName(base='rand', inplace=False, dunder_method=False, functional_overload=False), overload_name='generator_with_names_out')], ufunc_inner_loop={}, structured=False, structured_delegate=None, structured_inherits=None, precomputed=None, cpp_no_default_args=set(), is_abstract=True, has_composite_implicit_autograd_kernel=False, has_composite_implicit_autograd_nested_tensor_kernel=False, has_composite_explicit_autograd_kernel=True, 
has_composite_explicit_autograd_non_functional_kernel=False, tags={'nondeterministic_seeded'}), outplace=None)
rand.generator & rand.generator_out
PythonSignatureGroup(signature=PythonSignature(name='rand', input_args=(PythonArgument(name='size', type=ListType(elem=BaseType(name=<BaseTy.SymInt: 17>), size=None), default=None, default_init=None),), input_kwargs=(PythonArgument(name='generator', type=OptionalType(elem=BaseType(name=<BaseTy.Generator: 1>)), default=None, default_init=None),), output_args=PythonOutArgument(name='out', type=BaseType(name=<BaseTy.Tensor: 3>), default='None', default_init=None, outputs=(PythonArgument(name='out', type=BaseType(name=<BaseTy.Tensor: 3>), default=None, default_init=None),)), returns=PythonReturns(returns=(Return(name=None, type=BaseType(name=<BaseTy.Tensor: 3>), annotation=Annotation(alias_set=('a',), is_write=True, alias_set_after=())),)), tensor_options_args=(PythonArgument(name='dtype', type=OptionalType(elem=BaseType(name=<BaseTy.ScalarType: 2>)), default='None', default_init=None), PythonArgument(name='layout', type=OptionalType(elem=BaseType(name=<BaseTy.Layout: 10>)), default='None', default_init=None), PythonArgument(name='device', type=OptionalType(elem=BaseType(name=<BaseTy.Device: 11>)), default='None', default_init='torch::tensors::get_default_device()'), PythonArgument(name='pin_memory', type=OptionalType(elem=BaseType(name=<BaseTy.bool: 9>)), default='False', default_init=None), PythonArgument(name='requires_grad', type=OptionalType(elem=BaseType(name=<BaseTy.bool: 9>)), default='False', default_init=None)), method=False), base=NativeFunction(namespace='aten', func=FunctionSchema(name=OperatorName(name=BaseOperatorName(base='rand', inplace=False, dunder_method=False, functional_overload=False), 
overload_name='generator'), arguments=Arguments(pre_self_positional=(), self_arg=None, post_self_positional=(Argument(name='size', type=ListType(elem=BaseType(name=<BaseTy.SymInt: 17>), size=None), default=None, annotation=None),), pre_tensor_options_kwarg_only=(Argument(name='generator', type=OptionalType(elem=BaseType(name=<BaseTy.Generator: 1>)), default=None, annotation=None),), tensor_options=TensorOptionsArguments(dtype=Argument(name='dtype', type=OptionalType(elem=BaseType(name=<BaseTy.ScalarType: 2>)), default='None', annotation=None), layout=Argument(name='layout', type=OptionalType(elem=BaseType(name=<BaseTy.Layout: 10>)), default='None', annotation=None), device=Argument(name='device', type=OptionalType(elem=BaseType(name=<BaseTy.Device: 11>)), default='None', annotation=None), pin_memory=Argument(name='pin_memory', type=OptionalType(elem=BaseType(name=<BaseTy.bool: 9>)), default='None', annotation=None)), post_tensor_options_kwarg_only=(), out=()), returns=(Return(name=None, type=BaseType(name=<BaseTy.Tensor: 3>), annotation=None),)), use_const_ref_for_mutable_tensors=False, device_guard=True, device_check=<DeviceCheckType.ExactSame: 1>, python_module=None, category_override=None, variants={<Variant.function: 1>}, manual_kernel_registration=False, manual_cpp_binding=False, loc=Location(file='aten/src/ATen/native/native_functions.yaml', line=4275), autogen=[], ufunc_inner_loop={}, structured=False, structured_delegate=None, structured_inherits=None, precomputed=None, cpp_no_default_args=set(), is_abstract=True, has_composite_implicit_autograd_kernel=False, has_composite_implicit_autograd_nested_tensor_kernel=False, has_composite_explicit_autograd_kernel=True, has_composite_explicit_autograd_non_functional_kernel=False, tags={'nondeterministic_seeded'}), outplace=NativeFunction(namespace='aten', func=FunctionSchema(name=OperatorName(name=BaseOperatorName(base='rand', inplace=False, dunder_method=False, functional_overload=False), 
overload_name='generator_out'), arguments=Arguments(pre_self_positional=(), self_arg=None, post_self_positional=(Argument(name='size', type=ListType(elem=BaseType(name=<BaseTy.SymInt: 17>), size=None), default=None, annotation=None),), pre_tensor_options_kwarg_only=(Argument(name='generator', type=OptionalType(elem=BaseType(name=<BaseTy.Generator: 1>)), default=None, annotation=None),), tensor_options=None, post_tensor_options_kwarg_only=(), out=(Argument(name='out', type=BaseType(name=<BaseTy.Tensor: 3>), default=None, annotation=Annotation(alias_set=('a',), is_write=True, alias_set_after=())),)), returns=(Return(name=None, type=BaseType(name=<BaseTy.Tensor: 3>), annotation=Annotation(alias_set=('a',), is_write=True, alias_set_after=())),)), use_const_ref_for_mutable_tensors=False, device_guard=True, device_check=<DeviceCheckType.ExactSame: 1>, python_module=None, category_override=None, variants={<Variant.function: 1>}, manual_kernel_registration=False, manual_cpp_binding=False, loc=Location(file='aten/src/ATen/native/native_functions.yaml', line=4285), autogen=[], ufunc_inner_loop={}, structured=False, structured_delegate=None, structured_inherits=None, precomputed=None, cpp_no_default_args=set(), is_abstract=False, has_composite_implicit_autograd_kernel=True, has_composite_implicit_autograd_nested_tensor_kernel=False, has_composite_explicit_autograd_kernel=False, has_composite_explicit_autograd_non_functional_kernel=False, tags={'nondeterministic_seeded'}))
rand.names & rand.names_out
PythonSignatureGroup(signature=PythonSignature(name='rand', input_args=(PythonArgument(name='size', type=ListType(elem=BaseType(name=<BaseTy.SymInt: 17>), size=None), default=None, default_init=None),), input_kwargs=(PythonArgument(name='names', type=OptionalType(elem=ListType(elem=BaseType(name=<BaseTy.Dimname: 5>), size=None)), default=None, default_init=None),), output_args=None, returns=PythonReturns(returns=(Return(name=None, type=BaseType(name=<BaseTy.Tensor: 3>), annotation=None),)), tensor_options_args=(PythonArgument(name='dtype', type=OptionalType(elem=BaseType(name=<BaseTy.ScalarType: 2>)), default='None', default_init=None), PythonArgument(name='layout', type=OptionalType(elem=BaseType(name=<BaseTy.Layout: 10>)), default='None', default_init=None), PythonArgument(name='device', type=OptionalType(elem=BaseType(name=<BaseTy.Device: 11>)), default='None', default_init='torch::tensors::get_default_device()'), PythonArgument(name='pin_memory', type=OptionalType(elem=BaseType(name=<BaseTy.bool: 9>)), default='False', default_init=None), PythonArgument(name='requires_grad', type=OptionalType(elem=BaseType(name=<BaseTy.bool: 9>)), default='False', default_init=None)), method=False), base=NativeFunction(namespace='aten', func=FunctionSchema(name=OperatorName(name=BaseOperatorName(base='rand', inplace=False, dunder_method=False, functional_overload=False), 
overload_name='names'), arguments=Arguments(pre_self_positional=(), self_arg=None, post_self_positional=(Argument(name='size', type=ListType(elem=BaseType(name=<BaseTy.SymInt: 17>), size=None), default=None, annotation=None),), pre_tensor_options_kwarg_only=(Argument(name='names', type=OptionalType(elem=ListType(elem=BaseType(name=<BaseTy.Dimname: 5>), size=None)), default=None, annotation=None),), tensor_options=TensorOptionsArguments(dtype=Argument(name='dtype', type=OptionalType(elem=BaseType(name=<BaseTy.ScalarType: 2>)), default='None', annotation=None), layout=Argument(name='layout', type=OptionalType(elem=BaseType(name=<BaseTy.Layout: 10>)), default='None', annotation=None), device=Argument(name='device', type=OptionalType(elem=BaseType(name=<BaseTy.Device: 11>)), default='None', annotation=None), pin_memory=Argument(name='pin_memory', type=OptionalType(elem=BaseType(name=<BaseTy.bool: 9>)), default='None', annotation=None)), post_tensor_options_kwarg_only=(), out=()), returns=(Return(name=None, type=BaseType(name=<BaseTy.Tensor: 3>), annotation=None),)), use_const_ref_for_mutable_tensors=False, device_guard=False, device_check=<DeviceCheckType.NoCheck: 0>, python_module=None, category_override=None, variants={<Variant.function: 1>}, manual_kernel_registration=False, manual_cpp_binding=False, loc=Location(file='aten/src/ATen/native/native_functions.yaml', line=4254), autogen=[OperatorName(name=BaseOperatorName(base='rand', inplace=False, dunder_method=False, functional_overload=False), 
overload_name='names_out')], ufunc_inner_loop={}, structured=False, structured_delegate=None, structured_inherits=None, precomputed=None, cpp_no_default_args=set(), is_abstract=True, has_composite_implicit_autograd_kernel=False, has_composite_implicit_autograd_nested_tensor_kernel=False, has_composite_explicit_autograd_kernel=True, has_composite_explicit_autograd_non_functional_kernel=False, tags={'nondeterministic_seeded'}), outplace=None)
rand & rand.out
PythonSignatureGroup(signature=PythonSignature(name='rand', input_args=(PythonArgument(name='size', type=ListType(elem=BaseType(name=<BaseTy.SymInt: 17>), size=None), default=None, default_init=None),), input_kwargs=(), output_args=PythonOutArgument(name='out', type=BaseType(name=<BaseTy.Tensor: 3>), default='None', default_init=None, outputs=(PythonArgument(name='out', type=BaseType(name=<BaseTy.Tensor: 3>), default=None, default_init=None),)), returns=PythonReturns(returns=(Return(name=None, type=BaseType(name=<BaseTy.Tensor: 3>), annotation=Annotation(alias_set=('a',), is_write=True, alias_set_after=())),)), tensor_options_args=(PythonArgument(name='dtype', type=OptionalType(elem=BaseType(name=<BaseTy.ScalarType: 2>)), default='None', default_init=None), PythonArgument(name='layout', type=OptionalType(elem=BaseType(name=<BaseTy.Layout: 10>)), default='None', default_init=None), PythonArgument(name='device', type=OptionalType(elem=BaseType(name=<BaseTy.Device: 11>)), default='None', default_init='torch::tensors::get_default_device()'), PythonArgument(name='pin_memory', type=OptionalType(elem=BaseType(name=<BaseTy.bool: 9>)), default='False', default_init=None), PythonArgument(name='requires_grad', type=OptionalType(elem=BaseType(name=<BaseTy.bool: 9>)), default='False', default_init=None)), method=False), base=NativeFunction(namespace='aten', func=FunctionSchema(name=OperatorName(name=BaseOperatorName(base='rand', inplace=False, dunder_method=False, functional_overload=False), 
overload_name=''), arguments=Arguments(pre_self_positional=(), self_arg=None, post_self_positional=(Argument(name='size', type=ListType(elem=BaseType(name=<BaseTy.SymInt: 17>), size=None), default=None, annotation=None),), pre_tensor_options_kwarg_only=(), tensor_options=TensorOptionsArguments(dtype=Argument(name='dtype', type=OptionalType(elem=BaseType(name=<BaseTy.ScalarType: 2>)), default='None', annotation=None), layout=Argument(name='layout', type=OptionalType(elem=BaseType(name=<BaseTy.Layout: 10>)), default='None', annotation=None), device=Argument(name='device', type=OptionalType(elem=BaseType(name=<BaseTy.Device: 11>)), default='None', annotation=None), pin_memory=Argument(name='pin_memory', type=OptionalType(elem=BaseType(name=<BaseTy.bool: 9>)), default='None', annotation=None)), post_tensor_options_kwarg_only=(), out=()), returns=(Return(name=None, type=BaseType(name=<BaseTy.Tensor: 3>), annotation=None),)), use_const_ref_for_mutable_tensors=False, device_guard=True, device_check=<DeviceCheckType.ExactSame: 1>, python_module=None, category_override=None, variants={<Variant.function: 1>}, manual_kernel_registration=False, manual_cpp_binding=False, loc=Location(file='aten/src/ATen/native/native_functions.yaml', line=4270), autogen=[], ufunc_inner_loop={}, structured=False, structured_delegate=None, structured_inherits=None, precomputed=None, cpp_no_default_args=set(), is_abstract=True, has_composite_implicit_autograd_kernel=False, has_composite_implicit_autograd_nested_tensor_kernel=False, has_composite_explicit_autograd_kernel=True, has_composite_explicit_autograd_non_functional_kernel=False, tags={'nondeterministic_seeded'}), outplace=NativeFunction(namespace='aten', func=FunctionSchema(name=OperatorName(name=BaseOperatorName(base='rand', inplace=False, dunder_method=False, functional_overload=False), 
overload_name='out'), arguments=Arguments(pre_self_positional=(), self_arg=None, post_self_positional=(Argument(name='size', type=ListType(elem=BaseType(name=<BaseTy.SymInt: 17>), size=None), default=None, annotation=None),), pre_tensor_options_kwarg_only=(), tensor_options=None, post_tensor_options_kwarg_only=(), out=(Argument(name='out', type=BaseType(name=<BaseTy.Tensor: 3>), default=None, annotation=Annotation(alias_set=('a',), is_write=True, alias_set_after=())),)), returns=(Return(name=None, type=BaseType(name=<BaseTy.Tensor: 3>), annotation=Annotation(alias_set=('a',), is_write=True, alias_set_after=())),)), use_const_ref_for_mutable_tensors=False, device_guard=True, device_check=<DeviceCheckType.ExactSame: 1>, python_module=None, category_override=None, variants={<Variant.function: 1>}, manual_kernel_registration=False, manual_cpp_binding=False, loc=Location(file='aten/src/ATen/native/native_functions.yaml', line=4280), autogen=[], ufunc_inner_loop={}, structured=False, structured_delegate=None, structured_inherits=None, precomputed=None, cpp_no_default_args=set(), is_abstract=True, has_composite_implicit_autograd_kernel=False, has_composite_implicit_autograd_nested_tensor_kernel=False, has_composite_explicit_autograd_kernel=True, has_composite_explicit_autograd_non_functional_kernel=False, tags={'nondeterministic_seeded'}))

There are four PythonSignatureGroup elements above. In the first, the overload_name of the base member's func is generator_with_names, while the overload_name listed in its autogen is generator_with_names_out. For the second they are generator and generator_out, for the third names and names_out, and for the fourth the empty string and out. Note that in the first and third groups outplace is None and the out variant only appears in the base function's autogen list, whereas in the second and fourth groups the out variant is attached as the outplace member.

Between the base, autogen and outplace entries, this accounts for all eight rand-related functions.
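
As a toy illustration of this pairing (not torchgen's actual algorithm -- get_py_torch_functions matches full signatures -- and using only the eight rand overload names from above), grouping by overload name already reproduces the four base/outplace pairs:

    # Hypothetical toy pairing: group the eight rand overload names into
    # (base, outplace) pairs by stripping the "out" suffix.
    overload_names = [
        "names", "names_out",
        "generator_with_names", "generator_with_names_out",
        "generator", "generator_out",
        "", "out",
    ]

    def functional_key(name: str) -> str:
        # "generator_out" -> "generator", "names_out" -> "names", "out" -> ""
        if name == "out":
            return ""
        return name[: -len("_out")] if name.endswith("_out") else name

    groups = {}
    for name in overload_names:
        slot = "outplace" if name == "out" or name.endswith("_out") else "base"
        groups.setdefault(functional_key(name), {})[slot] = name

    print(groups)
    # {'names': {'base': 'names', 'outplace': 'names_out'},
    #  'generator_with_names': {'base': ..., 'outplace': ...},
    #  'generator': {'base': ..., 'outplace': ...},
    #  '': {'base': '', 'outplace': 'out'}}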

add.Tensor & add.out
PythonSignatureGroup(signature=PythonSignature(name='add', input_args=(PythonArgument(name='self', type=BaseType(name=<BaseTy.Tensor: 3>), default=None, default_init=None), PythonArgument(name='other', type=BaseType(name=<BaseTy.Tensor: 3>), default=None, default_init=None)), input_kwargs=(PythonArgument(name='alpha', type=BaseType(name=<BaseTy.Scalar: 12>), default='1', default_init=None),), output_args=PythonOutArgument(name='out', type=BaseType(name=<BaseTy.Tensor: 3>), default='None', default_init=None, outputs=(PythonArgument(name='out', type=BaseType(name=<BaseTy.Tensor: 3>), default=None, default_init=None),)), returns=PythonReturns(returns=(Return(name=None, type=BaseType(name=<BaseTy.Tensor: 3>), annotation=Annotation(alias_set=('a',), is_write=True, alias_set_after=())),)), tensor_options_args=(), method=False), base=NativeFunction(namespace='aten', func=FunctionSchema(name=OperatorName(name=BaseOperatorName(base='add', inplace=False, dunder_method=False, functional_overload=False), overload_name='Tensor'), arguments=Arguments(pre_self_positional=(), self_arg=SelfArgument(argument=Argument(name='self', type=BaseType(name=<BaseTy.Tensor: 3>), default=None, annotation=None)), post_self_positional=(Argument(name='other', type=BaseType(name=<BaseTy.Tensor: 3>), default=None, annotation=None),), pre_tensor_options_kwarg_only=(Argument(name='alpha', type=BaseType(name=<BaseTy.Scalar: 12>), default='1', annotation=None),), tensor_options=None, post_tensor_options_kwarg_only=(), out=()), returns=(Return(name=None, type=BaseType(name=<BaseTy.Tensor: 3>), annotation=None),)), use_const_ref_for_mutable_tensors=False, device_guard=True, device_check=<DeviceCheckType.NoCheck: 0>, python_module=None, category_override=None, variants={<Variant.function: 1>, <Variant.method: 2>}, manual_kernel_registration=False, manual_cpp_binding=False, loc=Location(file='aten/src/ATen/native/native_functions.yaml', line=497), autogen=[], ufunc_inner_loop={}, structured=False, structured_delegate=OperatorName(name=BaseOperatorName(base='add', inplace=False, dunder_method=False, functional_overload=False), overload_name='out'), structured_inherits=None, precomputed=None, cpp_no_default_args=set(), is_abstract=True, has_composite_implicit_autograd_kernel=False, has_composite_implicit_autograd_nested_tensor_kernel=False, has_composite_explicit_autograd_kernel=False, has_composite_explicit_autograd_non_functional_kernel=False, tags={'pointwise', 'canonical'}), outplace=NativeFunction(namespace='aten', func=FunctionSchema(name=OperatorName(name=BaseOperatorName(base='add', inplace=False, dunder_method=False, functional_overload=False), overload_name='out'), arguments=Arguments(pre_self_positional=(), self_arg=SelfArgument(argument=Argument(name='self', type=BaseType(name=<BaseTy.Tensor: 3>), default=None, annotation=None)), post_self_positional=(Argument(name='other', type=BaseType(name=<BaseTy.Tensor: 3>), default=None, annotation=None),), pre_tensor_options_kwarg_only=(Argument(name='alpha', type=BaseType(name=<BaseTy.Scalar: 12>), default='1', annotation=None),), tensor_options=None, post_tensor_options_kwarg_only=(), out=(Argument(name='out', type=BaseType(name=<BaseTy.Tensor: 3>), default=None, annotation=Annotation(alias_set=('a',), is_write=True, alias_set_after=())),)), returns=(Return(name=None, type=BaseType(name=<BaseTy.Tensor: 3>), annotation=Annotation(alias_set=('a',), is_write=True, alias_set_after=())),)), use_const_ref_for_mutable_tensors=False, device_guard=True, 
device_check=<DeviceCheckType.NoCheck: 0>, python_module=None, category_override=None, variants={<Variant.function: 1>}, manual_kernel_registration=False, manual_cpp_binding=False, loc=Location(file='aten/src/ATen/native/native_functions.yaml', line=520), autogen=[], ufunc_inner_loop={<UfuncKey.Generic: 7>: UfuncInnerLoop(name='add', supported_dtypes=<torchgen.utils.OrderedSet object at 0x7f600cff7910>, ufunc_key=<UfuncKey.Generic: 7>), <UfuncKey.ScalarOnly: 6>: UfuncInnerLoop(name='add', supported_dtypes=<torchgen.utils.OrderedSet object at 0x7f600cff7b80>, ufunc_key=<UfuncKey.ScalarOnly: 6>)}, structured=True, structured_delegate=None, structured_inherits='TensorIteratorBase', precomputed=None, cpp_no_default_args=set(), is_abstract=True, has_composite_implicit_autograd_kernel=False, has_composite_implicit_autograd_nested_tensor_kernel=False, has_composite_explicit_autograd_kernel=False, has_composite_explicit_autograd_non_functional_kernel=False, tags={'pointwise'}))
add & add.Tensor & add.out
PythonSignatureGroup(signature=PythonSignatureDeprecated(name='add', input_args=(PythonArgument(name='self', type=BaseType(name=<BaseTy.Tensor: 3>), default=None, default_init=None), PythonArgument(name='alpha', type=BaseType(name=<BaseTy.Scalar: 12>), default=None, default_init=None), PythonArgument(name='other', type=BaseType(name=<BaseTy.Tensor: 3>), default=None, default_init=None)), input_kwargs=(), output_args=PythonOutArgument(name='out', type=BaseType(name=<BaseTy.Tensor: 3>), default='None', default_init=None, outputs=(PythonArgument(name='out', type=BaseType(name=<BaseTy.Tensor: 3>), default=None, default_init=None),)), returns=PythonReturns(returns=(Return(name=None, type=BaseType(name=<BaseTy.Tensor: 3>), annotation=Annotation(alias_set=('a',), is_write=True, alias_set_after=())),)), tensor_options_args=(), method=False, deprecated_schema=FunctionSchema(name=OperatorName(name=BaseOperatorName(base='add', inplace=False, dunder_method=False, functional_overload=False), overload_name=''), arguments=Arguments(pre_self_positional=(), self_arg=SelfArgument(argument=Argument(name='self', type=BaseType(name=<BaseTy.Tensor: 3>), default=None, annotation=None)), post_self_positional=(Argument(name='alpha', type=BaseType(name=<BaseTy.Scalar: 12>), default=None, annotation=None), Argument(name='other', type=BaseType(name=<BaseTy.Tensor: 3>), default=None, annotation=None)), pre_tensor_options_kwarg_only=(), tensor_options=None, post_tensor_options_kwarg_only=(), out=(Argument(name='out', type=BaseType(name=<BaseTy.Tensor: 3>), default=None, annotation=Annotation(alias_set=('a',), is_write=True, alias_set_after=())),)), returns=(Return(name=None, type=BaseType(name=<BaseTy.Tensor: 3>), annotation=Annotation(alias_set=('a',), is_write=True, alias_set_after=())),)), deprecated_args_exprs=('out', 'self', 'other', 'alpha')), base=NativeFunction(namespace='aten', func=FunctionSchema(name=OperatorName(name=BaseOperatorName(base='add', inplace=False, dunder_method=False, functional_overload=False), overload_name='Tensor'), arguments=Arguments(pre_self_positional=(), self_arg=SelfArgument(argument=Argument(name='self', type=BaseType(name=<BaseTy.Tensor: 3>), default=None, annotation=None)), post_self_positional=(Argument(name='other', type=BaseType(name=<BaseTy.Tensor: 3>), default=None, annotation=None),), pre_tensor_options_kwarg_only=(Argument(name='alpha', type=BaseType(name=<BaseTy.Scalar: 12>), default='1', annotation=None),), tensor_options=None, post_tensor_options_kwarg_only=(), out=()), returns=(Return(name=None, type=BaseType(name=<BaseTy.Tensor: 3>), annotation=None),)), use_const_ref_for_mutable_tensors=False, device_guard=True, device_check=<DeviceCheckType.NoCheck: 0>, python_module=None, category_override=None, variants={<Variant.function: 1>, <Variant.method: 2>}, manual_kernel_registration=False, manual_cpp_binding=False, loc=Location(file='aten/src/ATen/native/native_functions.yaml', line=497), autogen=[], ufunc_inner_loop={}, structured=False, structured_delegate=OperatorName(name=BaseOperatorName(base='add', inplace=False, dunder_method=False, functional_overload=False), overload_name='out'), structured_inherits=None, precomputed=None, cpp_no_default_args=set(), is_abstract=True, has_composite_implicit_autograd_kernel=False, has_composite_implicit_autograd_nested_tensor_kernel=False, has_composite_explicit_autograd_kernel=False, has_composite_explicit_autograd_non_functional_kernel=False, tags={'pointwise', 'canonical'}), outplace=NativeFunction(namespace='aten', 
func=FunctionSchema(name=OperatorName(name=BaseOperatorName(base='add', inplace=False, dunder_method=False, functional_overload=False), overload_name='out'), arguments=Arguments(pre_self_positional=(), self_arg=SelfArgument(argument=Argument(name='self', type=BaseType(name=<BaseTy.Tensor: 3>), default=None, annotation=None)), post_self_positional=(Argument(name='other', type=BaseType(name=<BaseTy.Tensor: 3>), default=None, annotation=None),), pre_tensor_options_kwarg_only=(Argument(name='alpha', type=BaseType(name=<BaseTy.Scalar: 12>), default='1', annotation=None),), tensor_options=None, post_tensor_options_kwarg_only=(), out=(Argument(name='out', type=BaseType(name=<BaseTy.Tensor: 3>), default=None, annotation=Annotation(alias_set=('a',), is_write=True, alias_set_after=())),)), returns=(Return(name=None, type=BaseType(name=<BaseTy.Tensor: 3>), annotation=Annotation(alias_set=('a',), is_write=True, alias_set_after=())),)), use_const_ref_for_mutable_tensors=False, device_guard=True, device_check=<DeviceCheckType.NoCheck: 0>, python_module=None, category_override=None, variants={<Variant.function: 1>}, manual_kernel_registration=False, manual_cpp_binding=False, loc=Location(file='aten/src/ATen/native/native_functions.yaml', line=520), autogen=[], ufunc_inner_loop={<UfuncKey.Generic: 7>: UfuncInnerLoop(name='add', supported_dtypes=<torchgen.utils.OrderedSet object at 0x7f600cff7910>, ufunc_key=<UfuncKey.Generic: 7>), <UfuncKey.ScalarOnly: 6>: UfuncInnerLoop(name='add', supported_dtypes=<torchgen.utils.OrderedSet object at 0x7f600cff7b80>, ufunc_key=<UfuncKey.ScalarOnly: 6>)}, structured=True, structured_delegate=None, structured_inherits='TensorIteratorBase', precomputed=None, cpp_no_default_args=set(), is_abstract=True, has_composite_implicit_autograd_kernel=False, has_composite_implicit_autograd_nested_tensor_kernel=False, has_composite_explicit_autograd_kernel=False, has_composite_explicit_autograd_non_functional_kernel=False, tags={'pointwise'}))

unsorted_function_hints

    for group in sorted(sig_groups, key=lambda g: g.signature.name):
        name = group.signature.name
        unsorted_function_hints[name] += generate_type_hints(group)

        named_tuple = returns_named_tuple_pyi(group.signature)
        if named_tuple is not None and not group.signature.deprecated:
            # deprecated namedtuples are currently not included for torch functions
            tuple_name, tuple_def = named_tuple
            if tuple_name in namedtuples:
                assert namedtuples[tuple_name] == tuple_def
            else:
                namedtuples[tuple_name] = tuple_def

unsorted_function_hints is a defaultdict whose keys are function names and whose values are lists of strings.
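
A minimal sketch of how such a defaultdict accumulates hints (the hint string here is just one of the rand hints shown below):

    from collections import defaultdict

    unsorted_function_hints = defaultdict(list)  # missing keys start out as []
    # `+=` works even for a name never seen before, because the defaultdict
    # creates an empty list on first access.
    unsorted_function_hints["rand"] += ["def rand(*size: _int) -> Tensor: ..."]
    print(unsorted_function_hints["rand"])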

rand

The entry representing the rand function is as follows:

'rand': ['def rand(size: Sequence[Union[_int, SymInt]], *, generator: Optional[Generator], names: Optional[Sequence[Union[str, ellipsis, None]]], dtype: Optional[_dtype]=None, layout: Optional[_layout]=None, device: Optional[Union[_device, str, None]]=None, pin_memory: Optional[_bool]=False, requires_grad: Optional[_bool]=False) -> Tensor: ...', 'def rand(*size: _int, generator: Optional[Generator], names: Optional[Sequence[Union[str, ellipsis, None]]], dtype: Optional[_dtype]=None, layout: Optional[_layout]=None, device: Optional[Union[_device, str, None]]=None, pin_memory: Optional[_bool]=False, requires_grad: Optional[_bool]=False) -> Tensor: ...', 'def rand(size: Sequence[Union[_int, SymInt]], *, generator: Optional[Generator], out: Optional[Tensor]=None, dtype: Optional[_dtype]=None, layout: Optional[_layout]=None, device: Optional[Union[_device, str, None]]=None, pin_memory: Optional[_bool]=False, requires_grad: Optional[_bool]=False) -> Tensor: ...', 'def rand(*size: _int, generator: Optional[Generator], out: Optional[Tensor]=None, dtype: Optional[_dtype]=None, layout: Optional[_layout]=None, device: Optional[Union[_device, str, None]]=None, pin_memory: Optional[_bool]=False, requires_grad: Optional[_bool]=False) -> Tensor: ...', 'def rand(size: Sequence[Union[_int, SymInt]], *, names: Optional[Sequence[Union[str, ellipsis, None]]], dtype: Optional[_dtype]=None, layout: Optional[_layout]=None, device: Optional[Union[_device, str, None]]=None, pin_memory: Optional[_bool]=False, requires_grad: Optional[_bool]=False) -> Tensor: ...', 'def rand(*size: _int, names: Optional[Sequence[Union[str, ellipsis, None]]], dtype: Optional[_dtype]=None, layout: Optional[_layout]=None, device: Optional[Union[_device, str, None]]=None, pin_memory: Optional[_bool]=False, requires_grad: Optional[_bool]=False) -> Tensor: ...', 'def rand(size: Sequence[Union[_int, SymInt]], *, out: Optional[Tensor]=None, dtype: Optional[_dtype]=None, layout: Optional[_layout]=None, device: Optional[Union[_device, str, None]]=None, pin_memory: Optional[_bool]=False, requires_grad: Optional[_bool]=False) -> Tensor: ...', 'def rand(*size: _int, out: Optional[Tensor]=None, dtype: Optional[_dtype]=None, layout: Optional[_layout]=None, device: Optional[Union[_device, str, None]]=None, pin_memory: Optional[_bool]=False, requires_grad: Optional[_bool]=False) -> Tensor: ...']

Above are the eight overloads of rand. They fall into four groups: with both generator and names parameters, with only generator, with only names, and with neither. Each group further splits into a variant whose size parameter is a Sequence and one whose size is a variadic int. At this point they already correspond one-to-one with the declarations in torch/_C/_VariableFunctions.pyi.

add

Looking up the key add, its value list has three elements. The first comes from the add.Tensor & add.out group; the remaining two come from the deprecated add & add.Tensor & add.out group, without and with the out argument respectively:

'def add(input: Union[Tensor, Number], other: Union[Tensor, Number], *, alpha: Optional[Number]=1, out: Optional[Tensor]=None) -> Tensor: ...'
'def add(self: Tensor, alpha: Number, other: Tensor) -> Tensor: ...'
'def add(self: Tensor, alpha: Number, other: Tensor, *, out: Tensor) -> Tensor: ...'

function_hints

    function_hints = []
    for name, hints in sorted(unsorted_function_hints.items()):
        if len(hints) > 1:
            hints = ["@overload\n" + h for h in hints]
        function_hints += hints

function_hints is a list of strings:

['@overload\ndef __and_...ensor: ...', '@overload\ndef __and_...ensor: ...', '@overload\ndef __lshi...ensor: ...', '@overload\ndef __lshi...ensor: ...', '@overload\ndef __or__...ensor: ...', '@overload\ndef __or__...ensor: ...', '@overload\ndef __rshi...ensor: ...', '@overload\ndef __rshi...ensor: ...', '@overload\ndef __xor_...ensor: ...', '@overload\ndef __xor_...ensor: ...', 'def _adaptive_avg_po...ensor: ...', 'def _adaptive_avg_po...ensor: ...', 'def _add_batch_dim(i...ensor: ...', '@overload\ndef _add_r...ensor: ...', ...]

Its zeroth element is:

'@overload\ndef __and__(input: Tensor, other: Tensor) -> Tensor: ...'
rand

The eight elements representing rand are shown below. They are essentially the same as those in unsorted_function_hints; the only difference is the '@overload\n' prefix.

'@overload\ndef rand(size: Sequence[Union[_int, SymInt]], *, generator: Optional[Generator], names: Optional[Sequence[Union[str, ellipsis, None]]], dtype: Optional[_dtype]=None, layout: Optional[_layout]=None, device: Optional[Union[_device, str, None]]=None, pin_memory: Optional[_bool]=False, requires_grad: Optional[_bool]=False) -> Tensor: ...'
'@overload\ndef rand(*size: _int, generator: Optional[Generator], names: Optional[Sequence[Union[str, ellipsis, None]]], dtype: Optional[_dtype]=None, layout: Optional[_layout]=None, device: Optional[Union[_device, str, None]]=None, pin_memory: Optional[_bool]=False, requires_grad: Optional[_bool]=False) -> Tensor: ...'
'@overload\ndef rand(size: Sequence[Union[_int, SymInt]], *, generator: Optional[Generator], out: Optional[Tensor]=None, dtype: Optional[_dtype]=None, layout: Optional[_layout]=None, device: Optional[Union[_device, str, None]]=None, pin_memory: Optional[_bool]=False, requires_grad: Optional[_bool]=False) -> Tensor: ...'
'@overload\ndef rand(*size: _int, generator: Optional[Generator], out: Optional[Tensor]=None, dtype: Optional[_dtype]=None, layout: Optional[_layout]=None, device: Optional[Union[_device, str, None]]=None, pin_memory: Optional[_bool]=False, requires_grad: Optional[_bool]=False) -> Tensor: ...'
'@overload\ndef rand(size: Sequence[Union[_int, SymInt]], *, names: Optional[Sequence[Union[str, ellipsis, None]]], dtype: Optional[_dtype]=None, layout: Optional[_layout]=None, device: Optional[Union[_device, str, None]]=None, pin_memory: Optional[_bool]=False, requires_grad: Optional[_bool]=False) -> Tensor: ...'
'@overload\ndef rand(*size: _int, names: Optional[Sequence[Union[str, ellipsis, None]]], dtype: Optional[_dtype]=None, layout: Optional[_layout]=None, device: Optional[Union[_device, str, None]]=None, pin_memory: Optional[_bool]=False, requires_grad: Optional[_bool]=False) -> Tensor: ...'
'@overload\ndef rand(size: Sequence[Union[_int, SymInt]], *, out: Optional[Tensor]=None, dtype: Optional[_dtype]=None, layout: Optional[_layout]=None, device: Optional[Union[_device, str, None]]=None, pin_memory: Optional[_bool]=False, requires_grad: Optional[_bool]=False) -> Tensor: ...'
'@overload\ndef rand(*size: _int, out: Optional[Tensor]=None, dtype: Optional[_dtype]=None, layout: Optional[_layout]=None, device: Optional[Union[_device, str, None]]=None, pin_memory: Optional[_bool]=False, requires_grad: Optional[_bool]=False) -> Tensor: ...'
add

The three elements representing add are:

'@overload\ndef add(input: Union[Tensor, Number], other: Union[Tensor, Number], *, alpha: Optional[Number]=1, out: Optional[Tensor]=None) -> Tensor: ...'
'@overload\ndef add(self: Tensor, alpha: Number, other: Tensor) -> Tensor: ...'
'@overload\ndef add(self: Tensor, alpha: Number, other: Tensor, *, out: Tensor) -> Tensor: ...'

hinted_function_names

    # Generate __all__ directive
    # ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
    # Include only the functions that contain hints, to prevent undefined
    # symbols to be included in the `__all__` directive.
    hinted_function_names = [
        name for name, hint in unsorted_function_hints.items() if hint
    ]

hinted_function_names is a list of strings: simply the names of all functions that have hints:

['sparse_csr_tensor', '_sparse_csr_tensor_unsafe', 'sparse_csc_tensor', '_sparse_csc_tensor_unsafe', 'sparse_bsr_tensor', '_sparse_bsr_tensor_unsafe', 'sparse_bsc_tensor', '_sparse_bsc_tensor_unsafe', 'set_flush_denormal', 'get_default_dtype', 'asarray', 'from_numpy', 'frombuffer', 'numel', ...]

It also includes:

'rand', 'rand_like', 'randint_like', 'randn', 'randn_like', 'randperm'

'add'

all_symbols

    all_symbols = sorted(list(namedtuples.keys()) + hinted_function_names)

all_symbols looks like this:

['__and__', '__lshift__', '__or__', '__rshift__', '__xor__', '_adaptive_avg_pool2d', '_adaptive_avg_pool3d', '_add_batch_dim', '_add_relu', '_add_relu_', '_addmm_activation', '_aminmax', '_amp_foreach_non_fin...d_unscale_', '_amp_update_scale_', ...]

It also includes:

'rand', 'rand_like', 'randint_like', 'randn', 'randn_like', 'randperm'

'add'

all_directive

Next, all_symbols is pretty-printed into a string and split on '\n', giving a list of strings; this is all_directive:

    all_directive = pformat(all_symbols, width=100, compact=True).split("\n")
    all_directive[0] = "__all__ = {}".format(all_directive[0])

Its zeroth element is:

"__all__ = ['__and__', '__lshift__', '__or__', '__rshift__', '__xor__', '_adaptive_avg_pool2d',"

The element containing add is:

" 'adaptive_max_pool1d', 'add', 'addbmm', 'addcdiv', 'addcmul', 'addmm', 'addmv', 'addmv_', 'addr',"

The element containing rand is:

" 'rad2deg_', 'rand', 'rand_like', 'randint', 'randint_like', 'randn', 'randn_like', 'randperm',"

The last element is:

" 'vsplit', 'vstack', 'where', 'xlogy', 'xlogy_', 'zero_', 'zeros', 'zeros_like']"

env

So far we have obtained function_hints and all_directive; together with several other variables they make up env:

    env = {"namedtuple_defs": namedtuple_defs,"function_hints": function_hints,"tensor_method_hints": tensor_method_hints,"legacy_class_hints": legacy_class_hints,"legacy_storage_base_hints": legacy_storage_base_hints,"dtype_class_hints": dtype_class_hints,"dispatch_key_hints": dispatch_key_hints,"all_directive": all_directive,}

The resulting env looks like this:

{"namedtuple_defs":["_fake_quantize_per_t... Tensor)])","_fused_moving_avg_ob... Tensor)])","_linalg_det = NamedT... Tensor)])","_linalg_eigh = Named... Tensor)])","_linalg_slogdet = Na... Tensor)])","_linalg_solve_ex = N... Tensor)])","_linalg_svd = NamedT... Tensor)])","_lu_with_info = Name... Tensor)])","_unpack_dual = Named... Tensor)])","..."],"function_hints":["@overload\ndef __and_...ensor: ...","@overload\ndef __and_...ensor: ...","@overload\ndef __lshi...ensor: ...","@overload\ndef __lshi...ensor: ...","@overload\ndef __or__...ensor: ...","@overload\ndef __or__...ensor: ...","@overload\ndef __rshi...ensor: ...","@overload\ndef __rshi...ensor: ...","@overload\ndef __xor_...ensor: ...","..."],"tensor_method_hints":["def __abs__(self) ->...ensor: ...","def __add__(self, ot...ensor: ...","@overload\ndef __and_...ensor: ...","@overload\ndef __and_...ensor: ...","@overload\ndef __and_...ensor: ...","def __bool__(self) -....bool: ...","def __complex__(self...mplex: ...","def __div__(self, ot...ensor: ...","def __eq__(self, oth...[override]","..."],"legacy_class_hints":["class DoubleTensor(T...nsor): ...","class FloatTensor(Tensor): ...","class LongTensor(Tensor): ...","class IntTensor(Tensor): ...","class ShortTensor(Tensor): ...","class HalfTensor(Tensor): ...","class CharTensor(Tensor): ...","class ByteTensor(Tensor): ...","class BoolTensor(Tensor): ..."],"legacy_storage_base_hints":["class StorageBase(object): ..."],"dtype_class_hints":["float32: dtype = ...","float: dtype = ...","float64: dtype = ...","double: dtype = ...","float16: dtype = ...","bfloat16: dtype = ...","half: dtype = ...","uint8: dtype = ...","int8: dtype = ...","..."],"dispatch_key_hints":["Undefined: DispatchKey = ...","FPGA: DispatchKey = ...","ORT: DispatchKey = ...","Vulkan: DispatchKey = ...","Metal: DispatchKey = ...","MKLDNN: DispatchKey = ...","OpenGL: DispatchKey = ...","OpenCL: DispatchKey = ...","IDEEP: DispatchKey = ...","..."],"all_directive":["__all__ = ['__and__...,"," ...,"," '_aminmax', ...,"," ...,"," '_cast_Float', ...,"," ...,"," ...,"," ...,"," '_convolution_mode...,","..."]
}

env is then passed to FileManager's member function write_with_template:

    # ...
    fm.write_with_template(
        "torch/_C/__init__.pyi",
        "torch/_C/__init__.pyi.in",
        lambda: {
            "generated_comment": "@" + "generated from torch/_C/__init__.pyi.in",
            **env,
        },
    )
    fm.write_with_template(
        "torch/_C/_VariableFunctions.pyi",
        "torch/_C/_VariableFunctions.pyi.in",
        lambda: {
            "generated_comment": "@"
            + "generated from torch/_C/_VariableFunctions.pyi.in",
            **env,
        },
    )
    fm.write_with_template(
        "torch/_VF.pyi",
        "torch/_C/_VariableFunctions.pyi.in",
        lambda: {
            "generated_comment": "@"
            + "generated from torch/_C/_VariableFunctions.pyi.in",
            **env,
        },
    )
    fm.write_with_template(
        "torch/return_types.pyi",
        "torch/_C/return_types.pyi.in",
        lambda: {
            "generated_comment": "@" + "generated from torch/_C/return_types.pyi",
            **env,
        },
    )
    gen_nn_functional(fm)

We can see that this code calls FileManager's write_with_template as well as gen_nn_functional; gen_nn_functional is examined further below. Note the "@" + "generated" concatenation: it keeps the literal @generated marker out of the generator script's own source, so that only the output files carry the generated-file tag. A few Python idioms in this snippet are also worth spelling out first.

As explained in the Merging Dictionaries section of Unpacking Operators in Python, the {"a": 1, **my_dict} syntax first unpacks my_dict and then combines it with "a": 1 to form a new dictionary.

The lambda: {} form denotes a lambda that takes no arguments and returns a dictionary, as the sketch below illustrates.
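
A minimal sketch of both idioms together (my_dict, merged and make_env are made-up names):

    my_dict = {"b": 2, "c": 3}
    merged = {"a": 1, **my_dict}  # unpacking: {'a': 1, 'b': 2, 'c': 3}

    # a zero-argument lambda that returns a dict, as in the calls above
    make_env = lambda: {"a": 1, **my_dict}
    print(make_env() == merged)  # True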

Also notice the trailing , after the last argument in each write_with_template call. According to Should I add a trailing comma after the last argument in a function call? [closed], when a call's arguments are split across multiple lines, adding a trailing comma is the recommended style.

Recall from the beginning that six pyi files are generated from six pyi.in files. Here four pyi files are produced (torch/_VF.pyi being an extra output that reuses the _VariableFunctions.pyi.in template); two more (functional.pyi and _nn.pyi) are generated inside gen_nn_functional by calling FileManager.write_with_template, and the last, datapipe.pyi, is covered in its own section below.

FileManager.write_with_template generates a pyi file from a template, applying the substitutions specified by the replacement function. It is covered in a separate article; see PyTorch檔案生成機制中的FileManager.write_with_template.

gen_nn_functional

The gen_nn_functional function also lives in tools/pyi/gen_pyi.py. Its job is to generate torch/nn/functional.pyi and torch/_C/_nn.pyi from torch/nn/functional.pyi.in and torch/_C/_nn.pyi.in.

def gen_nn_functional(fm: FileManager) -> None:
    # Functions imported into `torch.nn.functional` from `torch`, perhaps being filtered
    # through an `_add_docstr` call
    imports = [
        "conv1d",
        "conv2d",
        "conv3d",
        "conv_transpose1d",
        "conv_transpose2d",
        "conv_transpose3d",
        "conv_tbc",
        "avg_pool1d",
        "relu_",
        "selu_",
        "celu_",
        "rrelu_",
        "pixel_shuffle",
        "pixel_unshuffle",
        "channel_shuffle",
        "native_channel_shuffle",
        "pdist",
        "cosine_similarity",
    ]
    # Functions generated by `torch._jit_internal.boolean_dispatch`
    dispatches = [
        "fractional_max_pool2d",
        "fractional_max_pool3d",
        "max_pool1d",
        "max_pool2d",
        "max_pool3d",
        "adaptive_max_pool1d",
        "adaptive_max_pool2d",
        "adaptive_max_pool3d",
    ]
    # Functions directly imported from `torch._C`
    from_c = [
        "avg_pool2d",
        "avg_pool3d",
        "hardtanh_",
        "elu_",
        "leaky_relu_",
        "logsigmoid",
        "softplus",
        "softshrink",
        "one_hot",
    ]
    import_code = ["from .. import {0} as {0}".format(_) for _ in imports]
    # TODO make these types more precise
    dispatch_code = ["{}: Callable".format(_) for _ in (dispatches + from_c)]
    fm.write_with_template(
        "torch/nn/functional.pyi",
        "torch/nn/functional.pyi.in",
        lambda: {
            "imported_hints": import_code,
            "dispatched_hints": dispatch_code,
        },
    )

    # functional.pyi already contains the definitions for those functions
    # so, we don't export then to it
    from_c.extend(["hardtanh", "leaky_relu", "hardsigmoid"])
    dispatch_code = ["{}: Callable".format(_) for _ in (dispatches + from_c)]
    fm.write_with_template(
        "torch/_C/_nn.pyi",
        "torch/_C/_nn.pyi.in",
        lambda: {
            "imported_hints": import_code,
            "dispatched_hints": dispatch_code,
        },
    )

We can see that this function, too, ultimately calls FileManager's write_with_template to produce the .pyi files: ${imported_hints} expands to lines like from .. import conv1d as conv1d, and ${dispatched_hints} to lines like max_pool1d: Callable.

Again, see PyTorch檔案生成機制中的FileManager.write_with_template for the details of write_with_template.

datapipe.pyi

回頭看CMakeLists.txt

file(GLOB_RECURSE datapipe_files "${TORCH_SRC_DIR}/utils/data/datapipes/*.py")
add_custom_command(
    OUTPUT
        "${TORCH_SRC_DIR}/utils/data/datapipes/datapipe.pyi"
    COMMAND
        "${PYTHON_EXECUTABLE}" ${TORCH_SRC_DIR}/utils/data/datapipes/gen_pyi.py
    DEPENDS
        "${TORCH_SRC_DIR}/utils/data/datapipes/datapipe.pyi.in"
        ${datapipe_files}
    WORKING_DIRECTORY
        "${TORCH_ROOT}"
)

datapipe.pyi is generated in a similar fashion: utils/data/datapipes/gen_pyi.py produces it from datapipe.pyi.in. Per the custom command above, it is regenerated (by running that script from the repo root) whenever datapipe.pyi.in or any of the datapipes/*.py files change.

The comments in torch/utils/data/datapipes/datapipe.pyi.in explain:

# This base template ("datapipe.pyi.in") is generated from mypy stubgen with minimal editing for code injection
# The output file will be "datapipe.pyi". This is executed as part of torch/CMakeLists.txt
# Note that, for mypy, .pyi file takes precedent over .py file, such that we must define the interface for other
# classes/objects here, even though we are not injecting extra code into them at the moment.

Generation results

torch/_C/_VariableFunctions.pyi.in為例:

  • generated_comment

    # ${generated_comment}
    

    is replaced with:

    # @generated from torch/_C/_VariableFunctions.pyi.in
    
  • function_hints

    ${function_hints}
    

    is replaced with:

    @overload
    def __and__(input: Tensor, other: Tensor) -> Tensor: ...
    # ...
    def zeros_like(input: Tensor, *, memory_format: Optional[memory_format] = None, dtype: Optional[_dtype] = None, layout: Optional[_layout] = None, device: Optional[Union[_device, str, None]] = None, pin_memory: Optional[_bool] = False, requires_grad: Optional[_bool] = False) -> Tensor: ...
    
  • all_directive

    ${all_directive}
    

    is replaced with:

    __all__ = ['__and__', '__lshift__', '__or__', '__rshift__', '__xor__', '_adaptive_avg_pool2d',
    # ...'view_copy', 'vsplit', 'vstack', 'where', 'xlogy', 'xlogy_', 'zero_', 'zeros', 'zeros_like']
    

Everything else in the output is identical to torch/_C/_VariableFunctions.pyi.in.
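
Conceptually, the substitution works like the following sketch (a simplification: the real implementation is torchgen's CodeTemplate driven by FileManager.write_with_template, and the template and env values here are abbreviated):

    import re

    template = "# ${generated_comment}\n\n${function_hints}\n"
    env = {
        "generated_comment": "@generated from torch/_C/_VariableFunctions.pyi.in",
        "function_hints": [
            "def rand(*size: _int) -> Tensor: ...",
            "def zeros_like(input: Tensor) -> Tensor: ...",
        ],
    }

    def substitute(template: str, env: dict) -> str:
        # Replace each ${key}; list values are joined with newlines, which is
        # how a list of hint strings expands to one declaration per line.
        def repl(m: re.Match) -> str:
            value = env[m.group(1)]
            return "\n".join(value) if isinstance(value, list) else value
        return re.sub(r"\$\{(\w+)\}", repl, template)

    print(substitute(template, env))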

Using pyi for type checking

torch/__init__.py contains the following passage:

# Appease the type checker: it can't deal with direct setting of globals().
# Note that we will see "too many" functions when reexporting this way; there
# is not a good way to fix this problem.  Perhaps, try to redesign VariableFunctions
# so that this import is good enough
if TYPE_CHECKING:
    # Some type signatures pulled in from _VariableFunctions here clash with
    # signatures already imported. For now these clashes are ignored; see
    # PR #43339 for details.
    from torch._C._VariableFunctions import *  # type: ignore[misc] # noqa: F403

In other words, when type checking is enabled, everything in torch._C._VariableFunctions gets imported.

And torch._C._VariableFunctions here is exactly the torch/_C/_VariableFunctions.pyi we just watched being generated.
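
With the stubs in place, a type checker such as mypy can validate calls against the generated hints. A small sketch (check_add.py is a made-up file name, and the exact mypy wording may differ):

    # check_add.py -- exercises the generated torch stubs
    import torch

    t = torch.rand(2, 3)
    s = torch.add(t, t, alpha=2)  # OK: matches a generated add() overload
    bad = torch.rand("oops")      # mypy error: no rand() overload accepts a str

Running mypy check_add.py should flag only the last line.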

According to pyi文件是干嘛的?(一文读懂Python的存根文件和类型检查), when a py file and a pyi file share the same name and directory, type checking picks up the stub automatically, with no import needed. Is the explicit import here needed because the py file (torch/__init__.py) and the pyi file have different names?
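
A minimal sketch of that same-name convention (mymod is a made-up module name):

    # mymod.py -- the implementation, unannotated
    def scale(x, factor):
        return x * factor

    # mymod.pyi -- a stub with the same name in the same directory; mypy and
    # IDEs consult it automatically, without any explicit import of the stub
    def scale(x: float, factor: float) -> float: ...

That would be consistent with the situation here: the generated stub lives under torch/_C/, so its contents must be re-exported into torch's namespace explicitly.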
