The MessagePassing class in the graph neural network library pytorch_geometric

MessagePassing is one of the most important base classes in the graph neural network Python library pytorch_geometric (PyG). It is used to build message passing graph neural networks: many PyG layers, such as the graph convolution layer GCNConv and the graph attention layer GATConv, are implemented on top of it, and we can also use it to define custom graph neural networks.

Let $\mathbf{x}_i^{(k-1)} \in \mathbb{R}^F$ denote the features of node $i$ at layer $(k-1)$, and let $\mathbf{e}_{j,i} \in \mathbb{R}^D$ denote the (optional) features of the edge from node $j$ to node $i$. A message passing graph neural network can then be described by

$$
\mathbf{x}_i^{(k)} = \gamma^{(k)} \left( \mathbf{x}_i^{(k-1)}, \bigoplus_{j \in \mathcal{N}(i)} \phi^{(k)} \left( \mathbf{x}_i^{(k-1)}, \mathbf{x}_j^{(k-1)}, \mathbf{e}_{j,i} \right) \right)
$$

where $\bigoplus$ is a differentiable, permutation-invariant function such as sum, mean, or max, while $\gamma$ and $\phi$ are differentiable functions such as MLPs.
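
To ground the notation, this update rule can be written out in plain PyTorch before introducing the library. The sketch below is illustrative only: the toy graph, the names phi and gamma, and all sizes are made up, and sum is chosen as $\bigoplus$.

import torch

num_nodes, F_in, D = 4, 8, 3
x = torch.randn(num_nodes, F_in)                   # x^{(k-1)}: node features
edge_index = torch.tensor([[0, 1, 2, 3],           # source nodes j
                           [1, 2, 3, 0]])          # target nodes i
edge_attr = torch.randn(edge_index.size(1), D)     # e_{j,i}: edge features

phi = torch.nn.Linear(2 * F_in + D, F_in)          # message function phi^{(k)}
gamma = torch.nn.Linear(2 * F_in, F_in)            # update function gamma^{(k)}

j, i = edge_index[0], edge_index[1]                # one message per edge
msg = phi(torch.cat([x[i], x[j], edge_attr], dim=-1))   # phi(x_i, x_j, e_{j,i})

# Sum as the permutation-invariant aggregation: scatter-add messages to targets.
aggr = torch.zeros(num_nodes, F_in).index_add_(0, i, msg)

x_new = gamma(torch.cat([x, aggr], dim=-1))        # x^{(k)} = gamma(x^{(k-1)}, aggr)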

In pytorch_geometric's MessagePassing, the message() function implements $\phi$, the update() function implements $\gamma$, and the attribute aggr defines the aggregation scheme $\bigoplus$.
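
A minimal sketch of how these pieces line up in a custom layer (hedged: SimpleConv, the channel sizes, and the concatenation scheme are illustrative choices of this example, not part of PyG's API):

import torch
from torch_geometric.nn import MessagePassing

class SimpleConv(MessagePassing):
    def __init__(self, channels):
        super().__init__(aggr='add')                 # the aggregation ⊕
        self.phi = torch.nn.Linear(2 * channels, channels)
        self.gamma = torch.nn.Linear(2 * channels, channels)

    def forward(self, x, edge_index):
        return self.propagate(edge_index, x=x)

    def message(self, x_i, x_j):                     # phi(x_i, x_j)
        return self.phi(torch.cat([x_i, x_j], dim=-1))

    def update(self, aggr_out, x):                   # gamma(x, aggr_out)
        return self.gamma(torch.cat([x, aggr_out], dim=-1))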

  • MessagePassing(aggr="add", flow="source_to_target", node_dim=-2): the aggregation scheme can be "add", "mean", or "max"; the flow direction of message passing is either "source_to_target" or "target_to_source"; node_dim indicates along which axis to propagate.

  • MessagePassing.propagate(edge_index, size=None, **kwargs): the entry point that starts message propagation. It takes the edge indices plus any additional data needed for message passing. In PyG, edge_index is usually a Long tensor of shape [2, num_edges] in COO format, i.e., the pair of node indices at the same position forms one edge.

  • MessagePassing.message(...): constructs the messages; it can take any argument that was originally passed to propagate(). Tensors can be mapped to the respective nodes by appending the suffix _i or _j to the variable name, e.g., x_i and x_j (by convention, x_i refers to the central node that aggregates the messages, and x_j to its neighboring nodes):

    x = ...           # Node features of shape [num_nodes, num_features]
    edge_index = ...  # Edge indices of shape [2, num_edges]

    # For flow="source_to_target", x_i and x_j take the values below;
    # they are assigned in MessagePassing's __collect__ function.
    x_j = x[edge_index[0]]  # Source node features [num_edges, num_features]
    x_i = x[edge_index[1]]  # Target node features [num_edges, num_features]
    edge_index_j = edge_index[0]
    edge_index_i = edge_index[1]

  • MessagePassing.update(aggr_out, ...): updates the embedding of every node; its first argument is the aggregation output, and it can also take any argument originally passed to propagate(). A runnable toy example combining these pieces follows.
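
The example below (the layer name CopyConv and all numbers are made up for illustration) uses an identity message, equivalent to the default, and "add" aggregation; it shows that with flow="source_to_target" each target node sums the features of its source neighbors:

import torch
from torch_geometric.nn import MessagePassing

class CopyConv(MessagePassing):
    def __init__(self):
        super().__init__(aggr='add', flow='source_to_target')

    def forward(self, x, edge_index):
        return self.propagate(edge_index, x=x)

    def message(self, x_j):
        # Identical to the default: forward the source node features.
        return x_j

x = torch.tensor([[1.], [10.], [100.]])
edge_index = torch.tensor([[0, 1, 2],    # source nodes j
                           [1, 2, 1]])   # target nodes i
out = CopyConv()(x, edge_index)
# Node 1 receives from nodes 0 and 2, node 2 from node 1, node 0 from nobody:
# out = [[0.], [101.], [10.]]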

Earlier versions of pytorch_geometric carry less wrapping and are therefore better suited for understanding the core logic of MessagePassing; the source code of version 1.5.0 is shown below.

# 1.5.0
import inspect
from collections import OrderedDict

import torch
from torch_sparse import SparseTensor
from torch_scatter import gather_csr, scatter, segment_csr

msg_aggr_special_args = set([
    'adj_t',
])

msg_special_args = set([
    'edge_index_i',
    'edge_index_j',
    'size_i',
    'size_j',
])

aggr_special_args = set([
    'ptr',
    'index',
    'dim_size',
])

update_special_args = set([])


class MessagePassing(torch.nn.Module):
    r"""Base class for creating message passing layers of the form

    .. math::
        \mathbf{x}_i^{\prime} = \gamma_{\mathbf{\Theta}} \left( \mathbf{x}_i,
        \square_{j \in \mathcal{N}(i)} \, \phi_{\mathbf{\Theta}}
        \left(\mathbf{x}_i, \mathbf{x}_j, \mathbf{e}_{j,i}\right) \right),

    where :math:`\square` denotes a differentiable, permutation invariant
    function, *e.g.*, sum, mean or max, and :math:`\gamma_{\mathbf{\Theta}}`
    and :math:`\phi_{\mathbf{\Theta}}` denote differentiable functions such as
    MLPs.
    See `here <https://pytorch-geometric.readthedocs.io/en/latest/notes/create_gnn.html>`__
    for the accompanying tutorial.

    Args:
        aggr (string, optional): The aggregation scheme to use
            (:obj:`"add"`, :obj:`"mean"`, :obj:`"max"` or :obj:`None`).
            (default: :obj:`"add"`)
        flow (string, optional): The flow direction of message passing
            (:obj:`"source_to_target"` or :obj:`"target_to_source"`).
            (default: :obj:`"source_to_target"`)
        node_dim (int, optional): The axis along which to propagate.
            (default: :obj:`0`)
    """
    def __init__(self, aggr="add", flow="source_to_target", node_dim=0):
        super(MessagePassing, self).__init__()

        self.aggr = aggr
        assert self.aggr in ['add', 'mean', 'max', None]

        self.flow = flow
        assert self.flow in ['source_to_target', 'target_to_source']

        self.node_dim = node_dim
        assert self.node_dim >= 0

        self.__msg_aggr_params__ = inspect.signature(
            self.message_and_aggregate).parameters
        self.__msg_aggr_params__ = OrderedDict(self.__msg_aggr_params__)

        self.__msg_params__ = inspect.signature(self.message).parameters
        self.__msg_params__ = OrderedDict(self.__msg_params__)

        self.__aggr_params__ = inspect.signature(self.aggregate).parameters
        self.__aggr_params__ = OrderedDict(self.__aggr_params__)
        self.__aggr_params__.popitem(last=False)

        self.__update_params__ = inspect.signature(self.update).parameters
        self.__update_params__ = OrderedDict(self.__update_params__)
        self.__update_params__.popitem(last=False)

        msg_aggr_args = set(
            self.__msg_aggr_params__.keys()) - msg_aggr_special_args
        msg_args = set(self.__msg_params__.keys()) - msg_special_args
        aggr_args = set(self.__aggr_params__.keys()) - aggr_special_args
        update_args = set(self.__update_params__.keys()) - update_special_args

        self.__user_args__ = set().union(msg_aggr_args, msg_args, aggr_args,
                                         update_args)

        self.__fuse__ = True

        # Support for GNNExplainer.
        self.__explain__ = False
        self.__edge_mask__ = None

    def __get_mp_type__(self, edge_index):
        if (torch.is_tensor(edge_index) and edge_index.dtype == torch.long
                and edge_index.dim() == 2 and edge_index.size(0)):
            return 'edge_index'
        elif isinstance(edge_index, SparseTensor):
            return 'adj_t'
        else:
            return ValueError(
                ('`MessagePassing.propagate` only supports `torch.LongTensor` '
                 'of shape `[2, num_messages]` or `torch_sparse.SparseTensor` '
                 'for argument :obj:`edge_index`.'))

    def __set_size__(self, size, idx, tensor):
        if not torch.is_tensor(tensor):
            pass
        elif size[idx] is None:
            size[idx] = tensor.size(self.node_dim)
        elif size[idx] != tensor.size(self.node_dim):
            raise ValueError(
                (f'Encountered node tensor with size '
                 f'{tensor.size(self.node_dim)} in dimension {self.node_dim}, '
                 f'but expected size {size[idx]}.'))

    def __collect__(self, edge_index, size, mp_type, kwargs):
        i, j = (0, 1) if self.flow == 'target_to_source' else (1, 0)
        ij = {'_i': i, '_j': j}

        out = {}
        for arg in self.__user_args__:
            if arg[-2:] not in ij.keys():
                out[arg] = kwargs.get(arg, inspect.Parameter.empty)
            else:
                idx = ij[arg[-2:]]
                data = kwargs.get(arg[:-2], inspect.Parameter.empty)

                if data is inspect.Parameter.empty:
                    out[arg] = data
                    continue

                if isinstance(data, tuple) or isinstance(data, list):
                    assert len(data) == 2
                    self.__set_size__(size, 1 - idx, data[1 - idx])
                    data = data[idx]

                if not torch.is_tensor(data):
                    out[arg] = data
                    continue

                self.__set_size__(size, idx, data)

                if mp_type == 'edge_index':
                    out[arg] = data.index_select(self.node_dim,
                                                 edge_index[idx])
                elif mp_type == 'adj_t' and idx == 1:
                    rowptr = edge_index.storage.rowptr()
                    for _ in range(self.node_dim):
                        rowptr = rowptr.unsqueeze(0)
                    out[arg] = gather_csr(data, rowptr)
                elif mp_type == 'adj_t' and idx == 0:
                    col = edge_index.storage.col()
                    out[arg] = data.index_select(self.node_dim, col)

        size[0] = size[1] if size[0] is None else size[0]
        size[1] = size[0] if size[1] is None else size[1]

        if mp_type == 'edge_index':
            out['edge_index_j'] = edge_index[j]
            out['edge_index_i'] = edge_index[i]
            out['index'] = out['edge_index_i']
        elif mp_type == 'adj_t':
            out['adj_t'] = edge_index
            out['edge_index_i'] = edge_index.storage.row()
            out['edge_index_j'] = edge_index.storage.col()
            out['index'] = edge_index.storage.row()
            out['ptr'] = edge_index.storage.rowptr()
            out['edge_attr'] = edge_index.storage.value()

        out['size_j'] = size[j]
        out['size_i'] = size[i]
        out['dim_size'] = out['size_i']

        return out

    def __distribute__(self, params, kwargs):
        out = {}
        for key, param in params.items():
            data = kwargs.get(key, inspect.Parameter.empty)
            if data is inspect.Parameter.empty:
                if param.default is inspect.Parameter.empty:
                    raise TypeError(f'Required parameter {key} is empty.')
                data = param.default
            out[key] = data
        return out

    def propagate(self, edge_index, size=None, **kwargs):
        r"""The initial call to start propagating messages.

        Args:
            adj (Tensor or SparseTensor): A :obj:`torch.LongTensor` or a
                :obj:`torch_sparse.SparseTensor` that defines the underlying
                message propagation.
                :obj:`edge_index` holds the indices of a general (sparse)
                assignment matrix of shape :obj:`[N, M]`.
                If :obj:`edge_index` is of type :obj:`torch.LongTensor`, its
                shape must be defined as :obj:`[2, num_messages]`, where
                messages from nodes in :obj:`edge_index[0]` are sent to
                nodes in :obj:`edge_index[1]`
                (in case :obj:`flow="source_to_target"`).
                If :obj:`edge_index` is of type
                :obj:`torch_sparse.SparseTensor`, its sparse indices
                :obj:`(row, col)` should relate to :obj:`row = edge_index[1]`
                and :obj:`col = edge_index[0]`.
                Hence, the only difference between those formats is that we
                need to input the *transposed* sparse adjacency matrix into
                :func:`propagate`.
            size (list or tuple, optional): The size :obj:`[N, M]` of the
                assignment matrix in case :obj:`edge_index` is a
                :obj:`LongTensor`.
                If set to :obj:`None`, the size will be automatically inferred
                and assumed to be quadratic.
                This argument is ignored in case :obj:`edge_index` is a
                :obj:`torch_sparse.SparseTensor`. (default: :obj:`None`)
            **kwargs: Any additional data which is needed to construct and
                aggregate messages, and to update node embeddings.
        """
        # We need to distinguish between the old `edge_index` format and the
        # new `torch_sparse.SparseTensor` format.
        mp_type = self.__get_mp_type__(edge_index)

        if mp_type == 'adj_t' and self.flow == 'target_to_source':
            raise ValueError(
                ('Flow direction "target_to_source" is invalid for message '
                 'propagation based on `torch_sparse.SparseTensor`. If you '
                 'really want to make use of a reverse message passing flow, '
                 'pass in the transposed sparse tensor to the message passing '
                 'module, e.g., `adj.t()`.'))

        if mp_type == 'edge_index':
            if size is None:
                size = [None, None]
            elif isinstance(size, int):
                size = [size, size]
            elif torch.is_tensor(size):
                size = size.tolist()
            elif isinstance(size, tuple):
                size = list(size)
        elif mp_type == 'adj_t':
            size = list(edge_index.sparse_sizes())[::-1]

        assert isinstance(size, list)
        assert len(size) == 2

        # We collect all arguments used for message passing in `kwargs`.
        kwargs = self.__collect__(edge_index, size, mp_type, kwargs)

        # Try to run `message_and_aggregate` first and see if it succeeds:
        if mp_type == 'adj_t' and self.__fuse__ and not self.__explain__:
            msg_aggr_kwargs = self.__distribute__(self.__msg_aggr_params__,
                                                  kwargs)
            out = self.message_and_aggregate(**msg_aggr_kwargs)
            if out == NotImplemented:
                self.__fuse__ = False

        # Otherwise, run both functions in separation.
        if mp_type == 'edge_index' or not self.__fuse__ or self.__explain__:
            msg_kwargs = self.__distribute__(self.__msg_params__, kwargs)
            out = self.message(**msg_kwargs)

            if self.__explain__:
                edge_mask = self.__edge_mask__.sigmoid()
                if out.size(0) != edge_mask.size(0):
                    loop = edge_mask.new_ones(size[0])
                    edge_mask = torch.cat([edge_mask, loop], dim=0)
                assert out.size(0) == edge_mask.size(0)
                out = out * edge_mask.view(-1, 1)

            aggr_kwargs = self.__distribute__(self.__aggr_params__, kwargs)
            out = self.aggregate(out, **aggr_kwargs)

        update_kwargs = self.__distribute__(self.__update_params__, kwargs)
        out = self.update(out, **update_kwargs)

        return out

    def message(self, x_j):
        r"""Constructs messages from node :math:`j` to node :math:`i`
        in analogy to :math:`\phi_{\mathbf{\Theta}}` for each edge in
        :obj:`edge_index`.
        This function can take any argument as input which was initially
        passed to :meth:`propagate`.
        Furthermore, tensors passed to :meth:`propagate` can be mapped to the
        respective nodes :math:`i` and :math:`j` by appending :obj:`_i` or
        :obj:`_j` to the variable name, *e.g.* :obj:`x_i` and :obj:`x_j`.
        """
        return x_j

    def aggregate(self, inputs, index, ptr=None, dim_size=None):
        r"""Aggregates messages from neighbors as
        :math:`\square_{j \in \mathcal{N}(i)}`.

        Takes in the output of message computation as first argument and any
        argument which was initially passed to :meth:`propagate`.

        By default, this function will delegate its call to scatter functions
        that support "add", "mean" and "max" operations as specified in
        :meth:`__init__` by the :obj:`aggr` argument.
        """
        if ptr is not None:
            for _ in range(self.node_dim):
                ptr = ptr.unsqueeze(0)
            return segment_csr(inputs, ptr, reduce=self.aggr)
        else:
            return scatter(inputs, index, dim=self.node_dim,
                           dim_size=dim_size, reduce=self.aggr)

    def message_and_aggregate(self, adj_t):
        r"""Fuses computations of :func:`message` and :func:`aggregate` into a
        single function.
        If applicable, this saves both time and memory since messages do not
        explicitly need to be materialized.
        This function only gets called in case it is implemented and
        propagation takes place based on a :obj:`torch_sparse.SparseTensor`.
        """
        return NotImplemented

    def update(self, inputs):
        r"""Updates node embeddings in analogy to
        :math:`\gamma_{\mathbf{\Theta}}` for each node
        :math:`i \in \mathcal{V}`.
        Takes in the output of aggregation as first argument and any argument
        which was initially passed to :meth:`propagate`.
        """
        return inputs
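
In the source above, message_and_aggregate() fuses message() and aggregate() into one step whenever propagation runs on a torch_sparse.SparseTensor. Below is a hedged sketch of how a subclass might implement it with a sparse matrix multiplication; FusedConv is an illustrative name, and the pattern mirrors what layers such as GraphConv do rather than being copied from PyG:

import torch
from torch_sparse import SparseTensor, matmul
from torch_geometric.nn import MessagePassing

class FusedConv(MessagePassing):
    def __init__(self):
        super().__init__(aggr='add')

    def forward(self, x, adj_t):
        # adj_t is the *transposed* sparse adjacency matrix, e.g.
        # SparseTensor(row=edge_index[1], col=edge_index[0],
        #              sparse_sizes=(num_nodes, num_nodes)).
        return self.propagate(adj_t, x=x)

    def message(self, x_j):
        # Fallback path, used when edge_index is a dense LongTensor.
        return x_j

    def message_and_aggregate(self, adj_t, x):
        # Fused path: one sparse matmul instead of materializing messages.
        return matmul(adj_t, x, reduce=self.aggr)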

The pytorch_geometric tutorial demonstrates how to implement a GCN layer with MessagePassing. Its formula is shown below: the features of neighboring nodes are first transformed by a weight matrix $\mathbf{W}$, normalized by the node degrees, and then summed up; $\mathbf{b}$ is a bias vector.

$$
\mathbf{x}_i^{(k)} = \sum_{j \in \mathcal{N}(i) \cup \{ i \}} \frac{1}{\sqrt{\deg(i)} \cdot \sqrt{\deg(j)}} \cdot \left( \mathbf{W}^{\top} \cdot \mathbf{x}_j^{(k-1)} \right) + \mathbf{b}
$$

This formula can be broken down into the following steps:

  1. Add self-loops to the adjacency matrix (if the adjacency matrix already contains self-loops, remove them first and then add them back to avoid duplicate edges, as the GATConv implementation does; see the sketch after this list).
  2. Linearly transform the node feature matrix.
  3. Compute the normalization coefficients.
  4. Normalize the node features.
  5. Sum up the neighboring node features, i.e., perform "add" aggregation.
  6. Apply a final bias vector.
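
A small sketch of the caveat in step 1 (toy tensors; remove_self_loops and add_self_loops are helpers from torch_geometric.utils):

import torch
from torch_geometric.utils import add_self_loops, remove_self_loops

x = torch.randn(3, 4)
edge_index = torch.tensor([[0, 1, 1],    # edge (1, 1) is an existing self-loop
                           [1, 0, 1]])

edge_index, _ = remove_self_loops(edge_index)                    # drop (1, 1)
edge_index, _ = add_self_loops(edge_index, num_nodes=x.size(0))  # re-add loops
# edge_index now contains each self-loop exactly once.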

Steps 1-3 are computed before message passing takes place, while steps 4-5 are handled by the message passing machinery. The example code is shown below:

import torch
from torch.nn import Linear, Parameter
from torch_geometric.nn import MessagePassing
from torch_geometric.utils import add_self_loops, degree

class GCNConv(MessagePassing):
    def __init__(self, in_channels, out_channels):
        super().__init__(aggr='add')  # "Add" aggregation (Step 5).
        self.lin = Linear(in_channels, out_channels, bias=False)
        self.bias = Parameter(torch.empty(out_channels))
        self.reset_parameters()

    def reset_parameters(self):
        self.lin.reset_parameters()
        self.bias.data.zero_()

    def forward(self, x, edge_index):
        # x has shape [N, in_channels]
        # edge_index has shape [2, E]

        # Step 1: Add self-loops to the adjacency matrix.
        edge_index, _ = add_self_loops(edge_index, num_nodes=x.size(0))

        # Step 2: Linearly transform node feature matrix.
        x = self.lin(x)

        # Step 3: Compute normalization.
        row, col = edge_index
        deg = degree(col, x.size(0), dtype=x.dtype)
        deg_inv_sqrt = deg.pow(-0.5)
        deg_inv_sqrt[deg_inv_sqrt == float('inf')] = 0
        norm = deg_inv_sqrt[row] * deg_inv_sqrt[col]

        # Step 4-5: Start propagating messages.
        out = self.propagate(edge_index, x=x, norm=norm)

        # Step 6: Apply a final bias vector.
        out = out + self.bias

        return out

    def message(self, x_j, norm):
        # x_j has shape [E, out_channels]

        # Step 4: Normalize node features.
        return norm.view(-1, 1) * x_j
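
For completeness, a quick usage sketch of the layer above on a toy graph (random features, arbitrary edges):

conv = GCNConv(in_channels=16, out_channels=32)
x = torch.randn(4, 16)                      # 4 nodes with 16 features each
edge_index = torch.tensor([[0, 1, 2, 3],
                           [1, 0, 3, 2]])
out = conv(x, edge_index)                   # shape [4, 32]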

References:

  1. https://pytorch-geometric.readthedocs.io/en/latest/tutorial/create_gnn.html
  2. https://github.com/pyg-team/pytorch_geometric/blob/1.5.0/torch_geometric/nn/conv/message_passing.py
