CS224N Assignment 4

NMT architecture diagram: [figure omitted]
LSTM background
nmt_model.py:
Reference article: a description of the LSTM output structure
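As a quick refresher before reading the model code, here is a minimal sketch (with made-up sizes) of what nn.LSTM returns: the per-step outputs plus the final hidden and cell states. For a bidirectional LSTM, the output's feature dimension doubles, while h_n and c_n gain a direction axis. This is exactly the structure the encoder below relies on.

    # Minimal sketch of nn.LSTM's output structure (sizes are illustrative only).
    import torch
    import torch.nn as nn

    lstm = nn.LSTM(input_size=8, hidden_size=16, bidirectional=True)
    x = torch.randn(5, 3, 8)            # (seq_len, batch, input_size); batch_first=False
    output, (h_n, c_n) = lstm(x)
    print(output.shape)                 # torch.Size([5, 3, 32]): 2*hidden_size for bidirectional
    print(h_n.shape, c_n.shape)         # torch.Size([2, 3, 16]) each: (num_directions, batch, hidden_size)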

#!/usr/bin/env python3
# -*- coding: utf-8 -*-

"""
CS224N 2020-21: Homework 4
nmt_model.py: NMT Model
Pencheng Yin <pcyin@cs.cmu.edu>
Sahil Chopra <schopra8@stanford.edu>
Vera Lin <veralin@stanford.edu>
"""
from collections import namedtuple
import sys
from typing import List, Tuple, Dict, Set, Union
import torch
import torch.nn as nn
import torch.nn.utils
import torch.nn.functional as F
from torch.nn.utils.rnn import pad_packed_sequence, pack_padded_sequence

from model_embeddings import ModelEmbeddings
Hypothesis = namedtuple('Hypothesis', ['value', 'score'])


class NMT(nn.Module):
    """ Simple Neural Machine Translation Model:
        - Bidirectional LSTM Encoder
        - Unidirectional LSTM Decoder
        - Global Attention Model (Luong, et al. 2015)
    """

    def __init__(self, embed_size, hidden_size, vocab, dropout_rate=0.2):
        """ Init NMT Model.

        @param embed_size (int): Embedding size (dimensionality)
        @param hidden_size (int): Hidden Size, the size of hidden states (dimensionality)
        @param vocab (Vocab): Vocabulary object containing src and tgt languages
                              See vocab.py for documentation.
        @param dropout_rate (float): Dropout probability, for attention
        """
        super(NMT, self).__init__()
        self.model_embeddings = ModelEmbeddings(embed_size, vocab)
        self.hidden_size = hidden_size
        self.dropout_rate = dropout_rate
        self.vocab = vocab

        # For sanity check only, not relevant to implementation
        self.gen_sanity_check = False
        self.counter = 0

        ### YOUR CODE HERE (~8 Lines)
        # Bidirectional LSTM encoder; its per-step outputs are 2*h wide.
        self.encoder = nn.LSTM(input_size=embed_size, hidden_size=hidden_size, bias=True, bidirectional=True)
        # Unidirectional LSTMCell decoder; its input is [y_t; o_{t-1}], hence embed_size + hidden_size.
        self.decoder = nn.LSTMCell(input_size=embed_size + hidden_size, hidden_size=hidden_size, bias=True)
        # W_{h} and W_{c} in the PDF: project the concatenated final encoder states (2*h) down to h.
        self.h_projection = nn.Linear(in_features=2 * hidden_size, out_features=hidden_size, bias=False)
        self.c_projection = nn.Linear(in_features=2 * hidden_size, out_features=hidden_size, bias=False)
        # W_{attProj} in the PDF: projects encoder hidden states for multiplicative attention.
        self.att_projection = nn.Linear(in_features=2 * hidden_size, out_features=hidden_size, bias=False)
        # W_{u} in the PDF: maps [a_t; h_t^dec] (3*h) to the combined output (h).
        self.combined_output_projection = nn.Linear(in_features=3 * hidden_size, out_features=hidden_size, bias=False)
        # W_{vocab} in the PDF: maps the combined output to target-vocabulary logits.
        self.target_vocab_projection = nn.Linear(in_features=hidden_size, out_features=len(self.vocab.tgt), bias=False)
        self.dropout = nn.Dropout(p=dropout_rate)
        ### END YOUR CODE

    def forward(self, source: List[List[str]], target: List[List[str]]) -> torch.Tensor:
        """ Take a mini-batch of source and target sentences, compute the log-likelihood of
        target sentences under the language models learned by the NMT system.

        @param source (List[List[str]]): list of source sentence tokens
        @param target (List[List[str]]): list of target sentence tokens, wrapped by `<s>` and `</s>`

        @returns scores (Tensor): a tensor of shape (b, ) representing the
                                  log-likelihood of generating the gold-standard target sentence for
                                  each example in the input batch. Here b = batch size.
        """
        # Compute sentence lengths
        source_lengths = [len(s) for s in source]

        # Convert list of lists into tensors
        source_padded = self.vocab.src.to_input_tensor(source, device=self.device)   # Tensor: (src_len, b)
        target_padded = self.vocab.tgt.to_input_tensor(target, device=self.device)   # Tensor: (tgt_len, b)

        # Run the network forward:
        # 1. Apply the encoder to `source_padded` by calling `self.encode()`
        # 2. Generate sentence masks for `source_padded` by calling `self.generate_sent_masks()`
        # 3. Apply the decoder to compute combined-output by calling `self.decode()`
        # 4. Compute log probability distribution over the target vocabulary using the
        #    combined_outputs returned by `self.decode()`.
        enc_hiddens, dec_init_state = self.encode(source_padded, source_lengths)
        enc_masks = self.generate_sent_masks(enc_hiddens, source_lengths)
        combined_outputs = self.decode(enc_hiddens, enc_masks, dec_init_state, target_padded)
        P = F.log_softmax(self.target_vocab_projection(combined_outputs), dim=-1)

        # Zero out probabilities for positions that are padding in the target text
        target_masks = (target_padded != self.vocab.tgt['<pad>']).float()

        # Compute log probability of generating the true target words
        target_gold_words_log_prob = torch.gather(P, index=target_padded[1:].unsqueeze(-1), dim=-1).squeeze(-1) * target_masks[1:]
        scores = target_gold_words_log_prob.sum(dim=0)
        return scores

    def encode(self, source_padded: torch.Tensor, source_lengths: List[int]) -> Tuple[torch.Tensor, Tuple[torch.Tensor, torch.Tensor]]:
        """ Apply the encoder to source sentences to obtain encoder hidden states.
            Additionally, take the final states of the encoder and project them to obtain initial states for the decoder.

        @param source_padded (Tensor): Tensor of padded source sentences with shape (src_len, b), where
                                       b = batch size, src_len = maximum source sentence length. Note that
                                       these have already been sorted in order of longest to shortest sentence.
        @param source_lengths (List[int]): List of actual lengths for each of the source sentences in the batch
        @returns enc_hiddens (Tensor): Tensor of hidden units with shape (b, src_len, h*2), where
                                       b = batch size, src_len = maximum source sentence length, h = hidden size.
        @returns dec_init_state (tuple(Tensor, Tensor)): Tuple of tensors representing the decoder's initial
                                                         hidden state and cell.
        """
        ### YOUR CODE HERE (~8 Lines)
        # Embed the padded source sentences: (src_len, b, e)
        X = self.model_embeddings.source(source_padded)
        # Pack so the LSTM skips pad positions
        X = pack_padded_sequence(X, lengths=torch.tensor(source_lengths))
        enc_hiddens, (last_hidden, last_cell) = self.encoder(X)
        # Unpack; the encoder's output is (src_len, b, h*2), batch_first=True gives (b, src_len, h*2)
        enc_hiddens = pad_packed_sequence(enc_hiddens, batch_first=True)[0]
        # last_hidden and last_cell have shape (2, b, h); the first dimension corresponds to the
        # forward and backward directions. Concatenate the two directions into (b, 2*h) and project
        # down to h to obtain h_0^{dec} and c_0^{dec} from the PDF.
        last_hidden = torch.cat((last_hidden[0], last_hidden[1]), dim=1)
        init_decoder_hidden = self.h_projection(last_hidden)
        last_cell = torch.cat((last_cell[0], last_cell[1]), dim=1)
        init_decoder_cell = self.c_projection(last_cell)
        dec_init_state = (init_decoder_hidden, init_decoder_cell)
        ### END YOUR CODE

        return enc_hiddens, dec_init_state

    def decode(self, enc_hiddens: torch.Tensor, enc_masks: torch.Tensor,
               dec_init_state: Tuple[torch.Tensor, torch.Tensor], target_padded: torch.Tensor) -> torch.Tensor:
        """ Compute combined output vectors for a batch.

        @param enc_hiddens (Tensor): Encoder hidden states (b, src_len, h*2)
        @param enc_masks (Tensor): Sentence masks (b, src_len)
        @param dec_init_state (tuple(Tensor, Tensor)): Initial hidden state and cell for the decoder
        @param target_padded (Tensor): Gold-standard padded target sentences (tgt_len, b)
        @returns combined_outputs (Tensor): combined output tensor (tgt_len, b, h)
        """
        # Chop off the <END> token for max length sentences.
        target_padded = target_padded[:-1]

        # Initialize the decoder state (hidden and cell)
        dec_state = dec_init_state

        # Initialize previous combined output vector o_{t-1} as zero
        batch_size = enc_hiddens.size(0)
        o_prev = torch.zeros(batch_size, self.hidden_size, device=self.device)

        # Initialize a list we will use to collect the combined output o_t on each step
        combined_outputs = []

        enc_hiddens_proj = self.att_projection(enc_hiddens)
        Y = self.model_embeddings.target(target_padded)
        for Y_t in torch.split(Y, split_size_or_sections=1, dim=0):
            Y_t = torch.squeeze(Y_t, dim=0)
            # Ybar_t = [y_t; o_{t-1}] is the decoder input at this step
            Ybar_t = torch.cat((Y_t, o_prev), dim=1)
            next_dec_state, o_t, _ = self.step(Ybar_t, dec_state, enc_hiddens, enc_hiddens_proj, enc_masks)
            combined_outputs.append(o_t)
            o_prev = o_t
            dec_state = next_dec_state

        combined_outputs = torch.stack(combined_outputs, dim=0)
        return combined_outputs

    def step(self, Ybar_t: torch.Tensor,
             dec_state: Tuple[torch.Tensor, torch.Tensor],
             enc_hiddens: torch.Tensor,
             enc_hiddens_proj: torch.Tensor,
             enc_masks: torch.Tensor) -> Tuple[Tuple, torch.Tensor, torch.Tensor]:
        """ Compute one forward step of the LSTM decoder, including the attention computation. """
        combined_output = None

        # Advance the decoder one step from its current state
        dec_state = self.decoder(Ybar_t, dec_state)
        dec_hidden, dec_cell = dec_state

        # Compute the attention scores e_t: (b, src_len)
        e_t = torch.bmm(input=torch.unsqueeze(dec_hidden, 1), mat2=enc_hiddens_proj.permute(0, 2, 1))
        e_t = torch.squeeze(e_t, dim=1)

        # Mask out pad positions (where enc_masks == 1) before the softmax
        if enc_masks is not None:
            e_t.data.masked_fill_(enc_masks.bool(), -float('inf'))

        # Compute the attention weights
        alpha_t = F.softmax(e_t, dim=1)
        alpha_t = torch.unsqueeze(alpha_t, dim=1)

        # Compute the context vector a_t: (b, 2*h)
        a_t = torch.bmm(input=alpha_t, mat2=enc_hiddens)
        a_t = torch.squeeze(a_t, dim=1)

        # Combine the context vector and the decoder's hidden state
        u_t = torch.cat((a_t, dec_hidden), dim=1)

        # Project the combined vector
        v_t = self.combined_output_projection(u_t)

        # Apply nonlinearity and dropout
        O_t = self.dropout(torch.tanh(v_t))

        combined_output = O_t
        # Return the updated decoder state, the combined output, and the attention scores
        return dec_state, combined_output, e_t

    def generate_sent_masks(self, enc_hiddens: torch.Tensor, source_lengths: List[int]) -> torch.Tensor:
        """ Generate sentence masks for encoder hidden states.

        @param enc_hiddens (Tensor): encodings of shape (b, src_len, 2*h), where b = batch size,
                                     src_len = max source length, h = hidden size.
        @param source_lengths (List[int]): List of actual lengths for each of the sentences in the batch.

        @returns enc_masks (Tensor): Tensor of sentence masks of shape (b, src_len),
                                     where b = batch size, src_len = max source length.
        """
        enc_masks = torch.zeros(enc_hiddens.size(0), enc_hiddens.size(1), dtype=torch.float)
        for e_id, src_len in enumerate(source_lengths):
            enc_masks[e_id, src_len:] = 1
        return enc_masks.to(self.device)

    def beam_search(self, src_sent: List[str], beam_size: int = 5, max_decoding_time_step: int = 70) -> List[Hypothesis]:
        """ Given a single source sentence, perform beam search, yielding translations in the target language.

        @param src_sent (List[str]): a single source sentence (words)
        @param beam_size (int): beam size
        @param max_decoding_time_step (int): maximum number of time steps to unroll the decoding RNN

        @returns hypotheses (List[Hypothesis]): a list of hypotheses, each with two fields:
                value: List[str]: the decoded target sentence, represented as a list of words
                score: float: the log-likelihood of the target sentence
        """
        src_sents_var = self.vocab.src.to_input_tensor([src_sent], self.device)

        src_encodings, dec_init_vec = self.encode(src_sents_var, [len(src_sent)])
        src_encodings_att_linear = self.att_projection(src_encodings)

        h_tm1 = dec_init_vec
        att_tm1 = torch.zeros(1, self.hidden_size, device=self.device)

        eos_id = self.vocab.tgt['</s>']

        hypotheses = [['<s>']]
        hyp_scores = torch.zeros(len(hypotheses), dtype=torch.float, device=self.device)
        completed_hypotheses = []

        t = 0
        while len(completed_hypotheses) < beam_size and t < max_decoding_time_step:
            t += 1
            hyp_num = len(hypotheses)

            # Expand the (single-sentence) encodings so each live hypothesis sees the same source
            exp_src_encodings = src_encodings.expand(hyp_num,
                                                     src_encodings.size(1),
                                                     src_encodings.size(2))

            exp_src_encodings_att_linear = src_encodings_att_linear.expand(hyp_num,
                                                                           src_encodings_att_linear.size(1),
                                                                           src_encodings_att_linear.size(2))

            y_tm1 = torch.tensor([self.vocab.tgt[hyp[-1]] for hyp in hypotheses], dtype=torch.long, device=self.device)
            y_t_embed = self.model_embeddings.target(y_tm1)

            x = torch.cat([y_t_embed, att_tm1], dim=-1)

            (h_t, cell_t), att_t, _ = self.step(x, h_tm1,
                                                exp_src_encodings, exp_src_encodings_att_linear, enc_masks=None)

            # log probabilities over target words
            log_p_t = F.log_softmax(self.target_vocab_projection(att_t), dim=-1)

            live_hyp_num = beam_size - len(completed_hypotheses)
            contiuating_hyp_scores = (hyp_scores.unsqueeze(1).expand_as(log_p_t) + log_p_t).view(-1)
            top_cand_hyp_scores, top_cand_hyp_pos = torch.topk(contiuating_hyp_scores, k=live_hyp_num)

            # Recover which hypothesis and which word each flat candidate index refers to
            prev_hyp_ids = top_cand_hyp_pos // len(self.vocab.tgt)
            hyp_word_ids = top_cand_hyp_pos % len(self.vocab.tgt)

            new_hypotheses = []
            live_hyp_ids = []
            new_hyp_scores = []

            for prev_hyp_id, hyp_word_id, cand_new_hyp_score in zip(prev_hyp_ids, hyp_word_ids, top_cand_hyp_scores):
                prev_hyp_id = prev_hyp_id.item()
                hyp_word_id = hyp_word_id.item()
                cand_new_hyp_score = cand_new_hyp_score.item()

                hyp_word = self.vocab.tgt.id2word[hyp_word_id]
                new_hyp_sent = hypotheses[prev_hyp_id] + [hyp_word]
                if hyp_word == '</s>':
                    completed_hypotheses.append(Hypothesis(value=new_hyp_sent[1:-1],
                                                           score=cand_new_hyp_score))
                else:
                    new_hypotheses.append(new_hyp_sent)
                    live_hyp_ids.append(prev_hyp_id)
                    new_hyp_scores.append(cand_new_hyp_score)

            if len(completed_hypotheses) == beam_size:
                break

            live_hyp_ids = torch.tensor(live_hyp_ids, dtype=torch.long, device=self.device)
            h_tm1 = (h_t[live_hyp_ids], cell_t[live_hyp_ids])
            att_tm1 = att_t[live_hyp_ids]

            hypotheses = new_hypotheses
            hyp_scores = torch.tensor(new_hyp_scores, dtype=torch.float, device=self.device)

        if len(completed_hypotheses) == 0:
            completed_hypotheses.append(Hypothesis(value=hypotheses[0][1:],
                                                   score=hyp_scores[0].item()))

        completed_hypotheses.sort(key=lambda hyp: hyp.score, reverse=True)

        return completed_hypotheses

    @property
    def device(self) -> torch.device:
        """ Determine which device to place the Tensors upon, CPU or GPU. """
        return self.model_embeddings.source.weight.device

    @staticmethod
    def load(model_path: str):
        """ Load the model from a file.

        @param model_path (str): path to model
        """
        params = torch.load(model_path, map_location=lambda storage, loc: storage)
        args = params['args']
        model = NMT(vocab=params['vocab'], **args)
        model.load_state_dict(params['state_dict'])
        return model

    def save(self, path: str):
        """ Save the model to a file.

        @param path (str): path to the model
        """
        print('save model parameters to [%s]' % path, file=sys.stderr)

        params = {
            'args': dict(embed_size=self.model_embeddings.embed_size, hidden_size=self.hidden_size,
                         dropout_rate=self.dropout_rate),
            'vocab': self.vocab,
            'state_dict': self.state_dict()
        }

        torch.save(params, path)
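To see why the tensor shapes in step() line up, here is a minimal sketch of the multiplicative (Luong-style) attention on random tensors. The sizes b, src_len, and h are made up for illustration, and the standalone Linear layer stands in for the model's att_projection; this is not the trained model, just a shape walk-through.

    # Shape walk-through of the multiplicative attention in step() (illustrative sizes).
    import torch
    import torch.nn.functional as F

    b, src_len, h = 4, 7, 16
    dec_hidden = torch.randn(b, h)                  # decoder hidden state h_t^{dec}
    enc_hiddens = torch.randn(b, src_len, 2 * h)    # bidirectional encoder states
    att_projection = torch.nn.Linear(2 * h, h, bias=False)   # stands in for W_{attProj}

    enc_hiddens_proj = att_projection(enc_hiddens)                      # (b, src_len, h)
    e_t = torch.bmm(dec_hidden.unsqueeze(1),
                    enc_hiddens_proj.permute(0, 2, 1)).squeeze(1)       # (b, src_len) scores
    alpha_t = F.softmax(e_t, dim=1)                                     # attention weights
    a_t = torch.bmm(alpha_t.unsqueeze(1), enc_hiddens).squeeze(1)       # (b, 2h) context vector
    print(e_t.shape, alpha_t.shape, a_t.shape)

The context a_t is then concatenated with dec_hidden into a 3*h vector, which is exactly why combined_output_projection takes 3 * hidden_size input features.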

model_embeddings.py:

#!/usr/bin/env python3
# -*- coding: utf-8 -*-

import torch.nn as nn


class ModelEmbeddings(nn.Module):
    def __init__(self, embed_size, vocab):
        super(ModelEmbeddings, self).__init__()
        self.embed_size = embed_size

        # Indices of the <pad> token in each vocabulary; padding_idx keeps
        # the corresponding embedding rows fixed at zero.
        src_pad_token_idx = vocab.src['<pad>']
        tgt_pad_token_idx = vocab.tgt['<pad>']

        self.source = nn.Embedding(num_embeddings=len(vocab.src),
                                   embedding_dim=self.embed_size,
                                   padding_idx=src_pad_token_idx)
        self.target = nn.Embedding(num_embeddings=len(vocab.tgt),
                                   embedding_dim=self.embed_size,
                                   padding_idx=tgt_pad_token_idx)
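One detail worth highlighting: padding_idx pins the embedding row for <pad> to the zero vector and keeps its gradient at zero, so pad positions contribute nothing. A minimal check, with a made-up vocabulary size and indices:

    # padding_idx demo (hypothetical vocab size 10, <pad> at index 0).
    import torch
    import torch.nn as nn

    emb = nn.Embedding(num_embeddings=10, embedding_dim=4, padding_idx=0)
    ids = torch.tensor([[2, 5, 0]])    # last position is <pad>
    print(emb(ids)[0, -1])             # tensor([0., 0., 0., 0.], grad_fn=...)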
