A Transformer-Based Chinese Couplet Generator

✅ About the author: an undergraduate majoring in artificial intelligence, fond of computers and programming, blogging to record my learning journey.
🍎 Homepage: 小嗷犬's personal homepage
🍊 Personal site: 小嗷犬's tech site
🥭 Personal motto: To ordain conscience for Heaven and Earth, to secure life and fortune for the people, to carry on lost teachings of past sages, and to establish peace for all future generations.


Table of Contents

    • Introduction
      • Project Structure
    • Deployment
      • Clone the Project
      • Install Dependencies
      • Train the Model
      • Start the Web UI
    • Demo
      • Web UI
      • Learning Rate Schedule
      • Training History
    • Code
      • Config
      • Dataset
      • Model
      • Utils
      • Train
      • Web UI


Introduction

This project is a Transformer-based Chinese couplet generator. The model is built with PyTorch, and the Web UI with Gradio.

Dataset: https://www.kaggle.com/datasets/marquis03/chinese-couplets-dataset

GitHub repository: https://github.com/Marquis03/Chinese-Couplets-Generator-based-on-Transformer

Gitee repository: https://gitee.com/marquis03/Chinese-Couplets-Generator-based-on-Transformer

Project Structure

.
├── config
│   ├── __init__.py
│   └── config.py
├── data
│   ├── fixed_couplets_in.txt
│   └── fixed_couplets_out.txt
├── dataset
│   ├── __init__.py
│   └── dataset.py
├── img
│   ├── history.png
│   ├── lr_schedule.png
│   └── webui.gif
├── model
│   ├── __init__.py
│   └── model.py
├── trained
│   ├── vocab.pkl
│   └── CoupletsTransformer_best.pth
├── utils
│   ├── __init__.py
│   └── EarlyStopping.py
├── LICENSE
├── README.md
├── requirements.txt
├── train.py
└── webui.py

Deployment

Clone the Project

git clone https://github.com/Marquis03/Chinese-Couplets-Generator-based-on-Transformer.git
cd Chinese-Couplets-Generator-based-on-Transformer

Install Dependencies

pip install -r requirements.txt

Train the Model

python train.py

Kaggle Notebook: https://www.kaggle.com/code/marquis03/chinese-couplets-generator-based-on-transformer

Start the Web UI

python webui.py

Demo

Web UI

[Animated demo of the Web UI: img/webui.gif]

Learning Rate Schedule

[Plot of the learning-rate schedule: img/lr_schedule.png]
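
For reference, this curve comes from the WarmupExponentialLR lambda defined in train.py below: the learning rate ramps linearly from lr_min to lr_max over the warmup iterations, then decays exponentially back to lr_min by the final iteration (the optimizer's base lr is set to 1, so the lambda's value is the effective learning rate). In LaTeX notation, with warmup length $t_w$ and total iteration count $T_{\max}$:

$$
\eta(t) =
\begin{cases}
\eta_{\min} + (\eta_{\max} - \eta_{\min})\,\dfrac{t}{t_w}, & t < t_w \\[4pt]
\eta_{\max}\,\gamma^{\,t - t_w}, & t \ge t_w
\end{cases}
\qquad
\gamma = \exp\!\left(\frac{\ln(\eta_{\min}/\eta_{\max})}{T_{\max} - t_w}\right)
$$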

Training History

[Plot of the training history: img/history.png]

Code

Config

This section configures the project's parameters: global settings, paths, model hyperparameters, training settings, and logging.

Corresponding project file: config/config.py

import os
import sys
import time

import torch
from loguru import logger


class Config:
    def __init__(self):
        # global
        self.seed = 0
        self.cuDNN = True
        self.debug = False
        self.num_workers = 0
        self.str_time = time.strftime("%Y-%m-%dT%H%M%S", time.localtime(time.time()))

        # path
        self.project_dir = os.path.dirname(os.path.dirname(os.path.abspath(__file__)))
        self.dataset_dir = os.path.join(self.project_dir, "data")
        self.in_path = os.path.join(self.dataset_dir, "fixed_couplets_in.txt")
        self.out_path = os.path.join(self.dataset_dir, "fixed_couplets_out.txt")
        self.log_dir = os.path.join(self.project_dir, "logs")
        self.save_dir = os.path.join(self.log_dir, self.str_time)
        self.img_save_dir = os.path.join(self.save_dir, "images")
        self.model_save_dir = os.path.join(self.save_dir, "checkpoints")
        for path in (
            self.log_dir,
            self.save_dir,
            self.img_save_dir,
            self.model_save_dir,
        ):
            if not os.path.exists(path):
                os.makedirs(path)

        # model
        self.d_model = 256
        self.num_head = 8
        self.num_encoder_layers = 2
        self.num_decoder_layers = 2
        self.dim_feedforward = 1024
        self.dropout = 0.1

        # train
        self.device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
        self.batch_size = 128
        self.val_ratio = 0.1
        self.epochs = 20
        self.warmup_ratio = 0.12
        self.lr_max = 1e-3
        self.lr_min = 1e-4
        self.beta1 = 0.9
        self.beta2 = 0.98
        self.epsilon = 10e-9
        self.weight_decay = 0.01
        self.early_stop = True
        self.patience = 4
        self.delta = 0

        # log
        logger.remove()
        level_std = "DEBUG" if self.debug else "INFO"
        logger.add(
            sys.stdout,
            colorize=True,
            format="[<green>{time:YYYY-MM-DD HH:mm:ss,SSS}</green>|<level>{level: <8}</level>|<cyan>{name}</cyan>:<cyan>{function}</cyan>:<cyan>{line}</cyan>] >>> <level>{message}</level>",
            level=level_std,
        )
        logger.add(
            os.path.join(self.save_dir, f"{self.str_time}.log"),
            format="[{time:YYYY-MM-DD HH:mm:ss,SSS}|{level: <8}|{name}:{function}:{line}] >>> {message}",
            level="INFO",
        )
        logger.info("### Config:")
        for key, value in self.__dict__.items():
            logger.info(f"### {key:20} = {value}")

Dataset

This section defines the vocabulary, the dataset, and related helpers: data loading, vocabulary construction, the dataset wrapper, and the data loader.

Corresponding project file: dataset/dataset.py

from collections import Counter

import torch
import numpy as np
from torch.utils.data import Dataset, DataLoader


def load_data(filepaths, tokenizer=lambda s: s.strip().split()):
    raw_in_iter = iter(open(filepaths[0], encoding="utf8"))
    raw_out_iter = iter(open(filepaths[1], encoding="utf8"))
    return list(zip(map(tokenizer, raw_in_iter), map(tokenizer, raw_out_iter)))


class Vocab(object):
    UNK = "<unk>"  # 0
    PAD = "<pad>"  # 1
    BOS = "<bos>"  # 2
    EOS = "<eos>"  # 3

    def __init__(self, data=None, min_freq=1):
        counter = Counter()
        for lines in data:
            counter.update(lines[0])
            counter.update(lines[1])
        self.word2idx = {Vocab.UNK: 0, Vocab.PAD: 1, Vocab.BOS: 2, Vocab.EOS: 3}
        self.idx2word = {0: Vocab.UNK, 1: Vocab.PAD, 2: Vocab.BOS, 3: Vocab.EOS}
        idx = 4
        for word, freq in counter.items():
            if freq >= min_freq:
                self.word2idx[word] = idx
                self.idx2word[idx] = word
                idx += 1

    def __len__(self):
        return len(self.word2idx)

    def __getitem__(self, word):
        return self.word2idx.get(word, 0)

    def __call__(self, word):
        if not isinstance(word, (list, tuple)):
            return self[word]
        return [self[w] for w in word]

    def to_tokens(self, indices):
        if not isinstance(indices, (list, tuple, np.ndarray, torch.Tensor)):
            return self.idx2word[int(indices)]
        return [self.idx2word[int(i)] for i in indices]


def pad_sequence(sequences, batch_first=False, padding_value=0):
    max_len = max([s.size(0) for s in sequences])
    out_tensors = []
    for tensor in sequences:
        padding_content = [padding_value] * (max_len - tensor.size(0))
        tensor = torch.cat([tensor, torch.tensor(padding_content)], dim=0)
        out_tensors.append(tensor)
    out_tensors = torch.stack(out_tensors, dim=1)
    if batch_first:
        out_tensors = out_tensors.transpose(0, 1)
    return out_tensors.long()


class CoupletsDataset(Dataset):
    def __init__(self, data, vocab):
        self.data = data
        self.vocab = vocab
        self.PAD_IDX = self.vocab[self.vocab.PAD]
        self.BOS_IDX = self.vocab[self.vocab.BOS]
        self.EOS_IDX = self.vocab[self.vocab.EOS]

    def __len__(self):
        return len(self.data)

    def __getitem__(self, index):
        raw_in, raw_out = self.data[index]
        in_tensor_ = torch.LongTensor(self.vocab(raw_in))
        out_tensor_ = torch.LongTensor(self.vocab(raw_out))
        return in_tensor_, out_tensor_

    def collate_fn(self, batch):
        in_batch, out_batch = [], []
        for in_, out_ in batch:
            in_batch.append(in_)
            out_ = torch.cat(
                [
                    torch.LongTensor([self.BOS_IDX]),
                    out_,
                    torch.LongTensor([self.EOS_IDX]),
                ],
                dim=0,
            )
            out_batch.append(out_)
        in_batch = pad_sequence(in_batch, True, self.PAD_IDX)
        out_batch = pad_sequence(out_batch, True, self.PAD_IDX)
        return in_batch, out_batch

    def get_loader(self, batch_size, shuffle=False, num_workers=0):
        return DataLoader(
            self,
            batch_size=batch_size,
            shuffle=shuffle,
            num_workers=num_workers,
            collate_fn=self.collate_fn,
            pin_memory=True,
        )
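
A minimal sketch of how these pieces fit together. The two toy couplets below are made up for illustration; real data comes from load_data on the two text files:

from dataset import Vocab, CoupletsDataset

# Each sample is (input tokens, output tokens), the same shape load_data returns.
data = [
    (list("春回大地"), list("福满人间")),
    (list("风和日丽春常在"), list("鸟语花香岁更新")),
]
vocab = Vocab(data)
dataset = CoupletsDataset(data, vocab)
loader = dataset.get_loader(batch_size=2)

src, tgt = next(iter(loader))
# tgt is wrapped with <bos>/<eos> by collate_fn, so it is two tokens longer than src.
print(src.shape, tgt.shape)  # torch.Size([2, 7]) torch.Size([2, 9])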

Model

This section defines the model: TokenEmbedding, PositionalEncoding, and CoupletsTransformer.

Corresponding project file: model/model.py

import math

import torch
import torch.nn as nn


class TokenEmbedding(nn.Module):
    def __init__(self, vocab_size, emb_size):
        super(TokenEmbedding, self).__init__()
        self.embedding = nn.Embedding(vocab_size, emb_size)
        self.emb_size = emb_size

    def forward(self, tokens):
        return self.embedding(tokens) * math.sqrt(self.emb_size)


class PositionalEncoding(nn.Module):
    def __init__(self, d_model, dropout=0.1, max_len=5000):
        super(PositionalEncoding, self).__init__()
        self.dropout = nn.Dropout(p=dropout)
        pe = torch.zeros(max_len, d_model)
        position = torch.arange(0, max_len, dtype=torch.float).unsqueeze(1)
        div_term = torch.exp(
            torch.arange(0, d_model, 2).float() * (-math.log(10000.0) / d_model)
        )
        pe[:, 0::2] = torch.sin(position * div_term)
        pe[:, 1::2] = torch.cos(position * div_term)
        pe = pe.unsqueeze(0)
        self.register_buffer("pe", pe)

    def forward(self, x):
        x = x + self.pe[:, : x.size(1)]
        return self.dropout(x)


class CoupletsTransformer(nn.Module):
    def __init__(
        self,
        vocab_size,
        d_model=512,
        nhead=8,
        num_encoder_layers=6,
        num_decoder_layers=6,
        dim_feedforward=2048,
        dropout=0.1,
    ):
        super(CoupletsTransformer, self).__init__()
        self.name = "CoupletsTransformer"
        self.token_embedding = TokenEmbedding(vocab_size, d_model)
        self.pos_embedding = PositionalEncoding(d_model, dropout)
        self.transformer = nn.Transformer(
            d_model=d_model,
            nhead=nhead,
            num_encoder_layers=num_encoder_layers,
            num_decoder_layers=num_decoder_layers,
            dim_feedforward=dim_feedforward,
            dropout=dropout,
            batch_first=True,
        )
        self.fc = nn.Linear(d_model, vocab_size)
        self._reset_parameters()

    def _reset_parameters(self):
        for p in self.parameters():
            if p.dim() > 1:
                nn.init.xavier_uniform_(p)

    def forward(self, src, tgt, padding_value=0):
        src_embed = self.token_embedding(src)  # [batch_size, src_len, embed_dim]
        src_embed = self.pos_embedding(src_embed)  # [batch_size, src_len, embed_dim]
        tgt_embed = self.token_embedding(tgt)  # [batch_size, tgt_len, embed_dim]
        tgt_embed = self.pos_embedding(tgt_embed)  # [batch_size, tgt_len, embed_dim]
        tgt_mask = self.transformer.generate_square_subsequent_mask(tgt.size(-1)).to(
            tgt.device
        )
        src_key_padding_mask = src == padding_value  # [batch_size, src_len]
        tgt_key_padding_mask = tgt == padding_value  # [batch_size, tgt_len]
        outs = self.transformer(
            src=src_embed,
            tgt=tgt_embed,
            tgt_mask=tgt_mask,
            src_key_padding_mask=src_key_padding_mask,
            tgt_key_padding_mask=tgt_key_padding_mask,
            memory_key_padding_mask=src_key_padding_mask,
        )  # [batch_size, tgt_len, embed_dim]
        logits = self.fc(outs)  # [batch_size, tgt_len, vocab_size]
        return logits

    def encoder(self, src):
        src_embed = self.token_embedding(src)
        src_embed = self.pos_embedding(src_embed)
        memory = self.transformer.encoder(src_embed)
        return memory

    def decoder(self, tgt, memory):
        tgt_embed = self.token_embedding(tgt)
        tgt_embed = self.pos_embedding(tgt_embed)
        outs = self.transformer.decoder(tgt_embed, memory=memory)
        return outs

    def generate(self, text, vocab):
        self.eval()
        device = next(self.parameters()).device
        max_len = len(text)
        src = torch.LongTensor(vocab(list(text))).unsqueeze(0).to(device)
        memory = self.encoder(src)
        l_out = [vocab.BOS]
        for i in range(max_len):
            tgt = torch.LongTensor(vocab(l_out)).unsqueeze(0).to(device)
            outs = self.decoder(tgt, memory)
            prob = self.fc(outs[:, -1, :])
            next_token = vocab.to_tokens(prob.argmax(1).item())
            if next_token == vocab.EOS:
                break
            l_out.append(next_token)
        return "".join(l_out[1:])
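
A minimal inference sketch, assuming the pretrained checkpoint and pickled vocabulary that the repo ships under trained/, with hyperparameters matching the training config:

import joblib
import torch
from model import CoupletsTransformer

vocab = joblib.load("./trained/vocab.pkl")
model = CoupletsTransformer(
    vocab_size=len(vocab),
    d_model=256,
    nhead=8,
    num_encoder_layers=2,
    num_decoder_layers=2,
    dim_feedforward=1024,
    dropout=0.1,
)
model.load_state_dict(
    torch.load("./trained/CoupletsTransformer_best.pth", map_location="cpu")
)

# generate() decodes greedily and caps the output at the input length,
# so the second line always matches the first line's length.
print(model.generate("春回大地", vocab))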

Utils

This section defines utility classes, here just EarlyStopping.

Corresponding project file: utils/EarlyStopping.py

class EarlyStopping(object):
    def __init__(self, patience=7, delta=0):
        self.patience = patience
        self.counter = 0
        self.best_score = None
        self.early_stop = False
        self.val_loss_min = float("inf")
        self.delta = delta

    def __call__(self, val_loss, model):
        score = -val_loss
        if self.best_score is None:
            self.best_score = score
        elif score < self.best_score + self.delta:
            self.counter += 1
            if self.counter >= self.patience:
                self.early_stop = True
        else:
            self.best_score = score
            self.counter = 0
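
A minimal sketch of the intended call pattern inside a validation loop; the loss values here are made up:

from utils import EarlyStopping

stopper = EarlyStopping(patience=2)
for epoch, val_loss in enumerate([1.0, 0.8, 0.85, 0.9], start=1):
    stopper(val_loss, model=None)  # the model argument is accepted but unused here
    if stopper.early_stop:
        print(f"Stopping early at epoch {epoch}")  # fires after 2 non-improving epochs
        break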

Train

This section defines the training pipeline: training, validation, and saving the model.

Corresponding project file: train.py

import os
import gc
import time
import math
import random
import joblib
import warnings

warnings.filterwarnings("ignore")

import numpy as np
import pandas as pd
import seaborn as sns
from loguru import logger
from tqdm.auto import tqdm
import matplotlib.pyplot as plt
from sklearn.model_selection import train_test_split

sns.set_theme(
    style="darkgrid", font_scale=1.2, font="SimHei", rc={"axes.unicode_minus": False}
)

import torch
from torch import nn, optim
from torch.optim.lr_scheduler import LambdaLR

from config import Config
from model import CoupletsTransformer
from dataset import load_data, Vocab, CoupletsDataset
from utils import EarlyStopping


def train_model(
    config, model, train_loader, val_loader, optimizer, criterion, scheduler
):
    model = model.to(config.device)
    best_loss = float("inf")
    history = []
    model_path = os.path.join(config.model_save_dir, f"{model.name}_best.pth")
    if config.early_stop:
        early_stopping = EarlyStopping(patience=config.patience, delta=config.delta)
    for epoch in tqdm(range(1, config.epochs + 1), desc=f"All"):
        train_loss = train_one_epoch(
            config, model, train_loader, optimizer, criterion, scheduler
        )
        val_loss = evaluate(config, model, val_loader, criterion)
        perplexity = math.exp(val_loss)
        history.append((epoch, train_loss, val_loss))
        msg = f"Epoch {epoch}/{config.epochs}, Train Loss: {train_loss:.6f}, Val Loss: {val_loss:.6f}, Perplexity: {perplexity:.4f}"
        logger.info(msg)
        if val_loss < best_loss:
            logger.info(
                f"Val loss decrease from {best_loss:>10.6f} to {val_loss:>10.6f}"
            )
            torch.save(model.state_dict(), model_path)
            best_loss = val_loss
        if config.early_stop:
            early_stopping(val_loss, model)
            if early_stopping.early_stop:
                logger.info(f"Early stopping at epoch {epoch}")
                break
    logger.info(f"Save best model with val loss {best_loss:.6f} to {model_path}")
    model_path = os.path.join(config.model_save_dir, f"{model.name}_last.pth")
    torch.save(model.state_dict(), model_path)
    logger.info(f"Save last model with val loss {val_loss:.6f} to {model_path}")
    history = pd.DataFrame(
        history, columns=["Epoch", "Train Loss", "Val Loss"]
    ).set_index("Epoch")
    history.plot(
        subplots=True, layout=(1, 2), sharey="row", figsize=(14, 6), marker="o", lw=2
    )
    history_path = os.path.join(config.img_save_dir, "history.png")
    plt.savefig(history_path, dpi=300)
    logger.info(f"Save history to {history_path}")


def train_one_epoch(config, model, train_loader, optimizer, criterion, scheduler):
    model.train()
    train_loss = 0
    for src, tgt in tqdm(train_loader, desc=f"Epoch", leave=False):
        src, tgt = src.to(config.device), tgt.to(config.device)
        output = model(src, tgt[:, :-1], config.PAD_IDX)
        output = output.contiguous().view(-1, output.size(-1))
        tgt = tgt[:, 1:].contiguous().view(-1)
        loss = criterion(output, tgt)
        train_loss += loss.item()
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
        scheduler.step()
    return train_loss / len(train_loader)


def evaluate(config, model, val_loader, criterion):
    model.eval()
    val_loss = 0
    with torch.no_grad():
        for src, tgt in tqdm(val_loader, desc=f"Val", leave=False):
            src, tgt = src.to(config.device), tgt.to(config.device)
            output = model(src, tgt[:, :-1], config.PAD_IDX)
            output = output.contiguous().view(-1, output.size(-1))
            tgt = tgt[:, 1:].contiguous().view(-1)
            loss = criterion(output, tgt)
            val_loss += loss.item()
    return val_loss / len(val_loader)


def test_model(model, data, vocab):
    model.eval()
    for src_text, tgt_text in data:
        src_text, tgt_text = "".join(src_text), "".join(tgt_text)
        out_text = model.generate(src_text, vocab)
        logger.info(f"\nInput: {src_text}\nTarget: {tgt_text}\nOutput: {out_text}")


def seed_everything(seed):
    os.environ["PYTHONHASHSEED"] = str(seed)
    random.seed(seed)
    np.random.seed(seed)
    torch.manual_seed(seed)
    torch.cuda.manual_seed_all(seed)


def main():
    config = Config()

    # Set random seed
    seed_everything(config.seed)
    logger.info(f"Set random seed to {config.seed}")

    # Set cuDNN
    if config.cuDNN:
        torch.backends.cudnn.enabled = True
        torch.backends.cudnn.benchmark = True
        torch.backends.cudnn.deterministic = True

    # Load data
    data = load_data([config.in_path, config.out_path])
    if config.debug:
        data = data[:1000]
    logger.info(f"Load {len(data)} couplets")

    # Build vocab
    vocab = Vocab(data)
    vocab_size = len(vocab)
    logger.info(f"Build vocab with {vocab_size} tokens")
    vocab_path = os.path.join(config.model_save_dir, "vocab.pkl")
    joblib.dump(vocab, vocab_path)
    logger.info(f"Save vocab to {vocab_path}")

    # Build dataset
    data_train, data_val = train_test_split(
        data, test_size=config.val_ratio, random_state=config.seed, shuffle=True
    )
    train_dataset = CoupletsDataset(data_train, vocab)
    val_dataset = CoupletsDataset(data_val, vocab)
    config.PAD_IDX = train_dataset.PAD_IDX
    logger.info(f"Build train dataset with {len(train_dataset)} samples")
    logger.info(f"Build val dataset with {len(val_dataset)} samples")

    # Build dataloader
    train_loader = train_dataset.get_loader(
        config.batch_size, shuffle=True, num_workers=config.num_workers
    )
    val_loader = val_dataset.get_loader(
        config.batch_size, shuffle=False, num_workers=config.num_workers
    )
    logger.info(f"Build train dataloader with {len(train_loader)} batches")
    logger.info(f"Build val dataloader with {len(val_loader)} batches")

    # Build model
    model = CoupletsTransformer(
        vocab_size=vocab_size,
        d_model=config.d_model,
        nhead=config.num_head,
        num_encoder_layers=config.num_encoder_layers,
        num_decoder_layers=config.num_decoder_layers,
        dim_feedforward=config.dim_feedforward,
        dropout=config.dropout,
    )
    logger.info(f"Build model with {model.name}")

    # Build optimizer
    optimizer = optim.AdamW(
        model.parameters(),
        lr=1,
        betas=(config.beta1, config.beta2),
        eps=config.epsilon,
        weight_decay=config.weight_decay,
    )

    # Build criterion
    criterion = nn.CrossEntropyLoss(ignore_index=config.PAD_IDX, reduction="mean")

    # Build scheduler
    lr_max, lr_min = config.lr_max, config.lr_min
    T_max = config.epochs * len(train_loader)
    warm_up_iter = int(T_max * config.warmup_ratio)

    def WarmupExponentialLR(cur_iter):
        gamma = math.exp(math.log(lr_min / lr_max) / (T_max - warm_up_iter))
        if cur_iter < warm_up_iter:
            return (lr_max - lr_min) * (cur_iter / warm_up_iter) + lr_min
        else:
            return lr_max * gamma ** (cur_iter - warm_up_iter)

    scheduler = LambdaLR(optimizer, lr_lambda=WarmupExponentialLR)

    df_lr = pd.DataFrame(
        [WarmupExponentialLR(i) for i in range(T_max)],
        columns=["Learning Rate"],
    )
    plt.figure(figsize=(10, 6))
    sns.lineplot(data=df_lr, linewidth=2)
    plt.title("Learning Rate Schedule")
    plt.xlabel("Iteration")
    plt.ylabel("Learning Rate")
    lr_img_path = os.path.join(config.img_save_dir, "lr_schedule.png")
    plt.savefig(lr_img_path, dpi=300)
    logger.info(f"Save learning rate schedule to {lr_img_path}")

    # Garbage collect
    gc.collect()
    torch.cuda.empty_cache()

    # Train model
    train_model(
        config, model, train_loader, val_loader, optimizer, criterion, scheduler
    )

    # Test model
    test_model(model, data_val[:10], vocab)


if __name__ == "__main__":
    main()

Web UI

This section defines the Web UI: the input and output components and launching the app.

Corresponding project file: webui.py

import random
import joblib

import torch
import gradio as gr

from dataset import Vocab
from model import CoupletsTransformer

data_path = "./data/fixed_couplets_in.txt"
vocab_path = "./trained/vocab.pkl"
model_path = "./trained/CoupletsTransformer_best.pth"

vocab = joblib.load(vocab_path)
vocab_size = len(vocab)

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

model = CoupletsTransformer(
    vocab_size,
    d_model=256,
    nhead=8,
    num_encoder_layers=2,
    num_decoder_layers=2,
    dim_feedforward=1024,
    dropout=0.1,
).to(device)
model.load_state_dict(torch.load(model_path))
model.eval()

example = (
    line.replace(" ", "").strip() for line in iter(open(data_path, encoding="utf8"))
)
example = [line for line in example if len(line) > 5]
example = random.sample(example, 300)


def generate_couplet(vocab, model, src_text):
    if not src_text:
        return "上联不能为空"
    out_text = model.generate(src_text, vocab)
    return out_text


input_text = gr.Textbox(
    label="上联",
    placeholder="在这里输入上联",
    max_lines=1,
    lines=1,
    show_copy_button=True,
    autofocus=True,
)
output_text = gr.Textbox(
    label="下联",
    placeholder="在这里生成下联",
    max_lines=1,
    lines=1,
    show_copy_button=True,
)

demo = gr.Interface(
    fn=lambda x: generate_couplet(vocab, model, x),
    inputs=input_text,
    outputs=output_text,
    title="中文对联生成器",
    description="输入上联,生成下联",
    allow_flagging="never",
    submit_btn="生成下联",
    clear_btn="清空",
    examples=example,
    examples_per_page=50,
)

demo.launch()
