NLP Movie Review Sentiment Analysis Project

https://machinelearningmastery.com/develop-word-embedding-model-predicting-movie-review-sentiment/
https://machinelearningmastery.com/prepare-movie-review-data-sentiment-analysis/

This tutorial is divided into 5 parts:

  1. Movie review dataset
  2. Data preparation
  3. Training an Embedding layer
  4. Training a word2vec embedding
  5. Using a pre-trained embedding

Data Preparation

1. Split the data into training and test sets.
2. Load and clean the data to remove punctuation and numbers.
3. Define a vocabulary of preferred words.

Split tokens on white space.
Remove all punctuation from words.
Remove all words that are not purely composed of alphabetic characters.
Remove all words that are known stop words.
Remove all words that are <= 1 character long.

We can filter punctuation out of tokens using the string translate() function.
We can remove tokens that are just punctuation or that contain numbers by applying an isalpha() check to each token.
We can remove English stop words using the list loaded with NLTK.
We can filter out short tokens by checking their length.
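As a quick standalone illustration of these steps (the sentence below is a made-up example, not real review data):

from string import punctuation
from nltk.corpus import stopwords

text = "it's 1 of the 10 best films of 1999 - truly great !"
tokens = text.split()                               # split on white space
table = str.maketrans('', '', punctuation)
tokens = [w.translate(table) for w in tokens]       # strip punctuation
tokens = [w for w in tokens if w.isalpha()]         # drop non-alphabetic tokens
stop_words = set(stopwords.words('english'))
tokens = [w for w in tokens if w not in stop_words] # drop stop words
tokens = [w for w in tokens if len(w) > 1]          # drop short tokens
print(tokens)                                       # -> ['best', 'films', 'truly', 'great']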

from string import punctuation
from os import listdir
from collections import Counter
from nltk.corpus import stopwords

# load doc into memory
def load_doc(filename):
    # open the file as read only
    file = open(filename, 'r')
    # read all text
    text = file.read()
    # close the file
    file.close()
    return text

# turn a doc into clean tokens
def clean_doc(doc):
    # split into tokens by white space
    tokens = doc.split()
    # remove punctuation from each token
    table = str.maketrans('', '', punctuation)
    tokens = [w.translate(table) for w in tokens]
    # remove remaining tokens that are not alphabetic
    tokens = [word for word in tokens if word.isalpha()]
    # filter out stop words
    stop_words = set(stopwords.words('english'))
    tokens = [w for w in tokens if not w in stop_words]
    # filter out short tokens
    tokens = [word for word in tokens if len(word) > 1]
    return tokens

# load doc and add to vocab
def add_doc_to_vocab(filename, vocab):
    # load doc
    doc = load_doc(filename)
    # clean doc
    tokens = clean_doc(doc)
    # update counts
    vocab.update(tokens)

# load all docs in a directory
def process_docs(directory, vocab, is_train):
    # walk through all files in the folder
    for filename in listdir(directory):
        # skip any reviews in the test set
        if is_train and filename.startswith('cv9'):
            continue
        if not is_train and not filename.startswith('cv9'):
            continue
        # create the full path of the file to open
        path = directory + '/' + filename
        # add doc to vocab
        add_doc_to_vocab(path, vocab)

# define vocab
vocab = Counter()
# add all docs to vocab
process_docs('txt_sentoken/neg', vocab, True)
process_docs('txt_sentoken/pos', vocab, True)
# print the size of the vocab
print(len(vocab))
# print the top words in the vocab
print(vocab.most_common(50))

A Counter() is used to count occurrences and de-duplicate tokens at the same time.
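A tiny illustration with made-up tokens:

from collections import Counter

vocab = Counter()
vocab.update(['great', 'film', 'great', 'plot'])
vocab.update(['dull', 'film'])
print(len(vocab))            # 4 distinct tokens
print(vocab.most_common(2))  # [('great', 2), ('film', 2)]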

Save the vocabulary to a file, keeping only tokens that occur at least twice.

# save list to file
def save_list(lines, filename):
    # convert lines to a single blob of text
    data = '\n'.join(lines)
    # open file
    file = open(filename, 'w')
    # write text
    file.write(data)
    # close file
    file.close()

# keep only tokens with a minimum occurrence (as in the original tutorial)
min_occurrence = 2
tokens = [k for k, c in vocab.items() if c >= min_occurrence]
print(len(tokens))
# save tokens to a vocabulary file
save_list(tokens, 'vocab.txt')

Training an Embedding Layer

https://machinelearningmastery.com/what-are-word-embeddings/
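An Embedding layer maps each integer word index to a dense vector that is learned together with the rest of the network. A minimal, illustrative sketch (the vocabulary size, vector dimension, and sequence length below are made-up numbers, not values from this project):

from keras.layers import Embedding

# illustrative only: 5000-word vocabulary, 100-dimensional vectors, 200-token inputs
layer = Embedding(input_dim=5000, output_dim=100, input_length=200)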

from string import punctuation
from os import listdir
from numpy import array
from keras.preprocessing.text import Tokenizer
from keras.preprocessing.sequence import pad_sequences
from keras.models import Sequential
from keras.layers import Dense
from keras.layers import Flatten
from keras.layers import Embedding
from keras.layers.convolutional import Conv1D
from keras.layers.convolutional import MaxPooling1D

# load doc into memory
def load_doc(filename):
    # open the file as read only
    file = open(filename, 'r')
    # read all text
    text = file.read()
    # close the file
    file.close()
    return text

# turn a doc into clean tokens
def clean_doc(doc, vocab):
    # split into tokens by white space
    tokens = doc.split()
    # remove punctuation from each token
    table = str.maketrans('', '', punctuation)
    tokens = [w.translate(table) for w in tokens]
    # filter out tokens not in vocab
    tokens = [w for w in tokens if w in vocab]
    tokens = ' '.join(tokens)
    return tokens

# load all docs in a directory
def process_docs(directory, vocab, is_train):
    documents = list()
    # walk through all files in the folder
    for filename in listdir(directory):
        # skip any reviews in the test set
        if is_train and filename.startswith('cv9'):
            continue
        if not is_train and not filename.startswith('cv9'):
            continue
        # create the full path of the file to open
        path = directory + '/' + filename
        # load the doc
        doc = load_doc(path)
        # clean doc
        tokens = clean_doc(doc, vocab)
        # add to list
        documents.append(tokens)
    return documents

# load the vocabulary
vocab_filename = 'vocab.txt'
vocab = load_doc(vocab_filename)
vocab = vocab.split()
vocab = set(vocab)

# load all training reviews
positive_docs = process_docs('txt_sentoken/pos', vocab, True)
negative_docs = process_docs('txt_sentoken/neg', vocab, True)
train_docs = negative_docs + positive_docs

# create the tokenizer
tokenizer = Tokenizer()
# fit the tokenizer on the documents
tokenizer.fit_on_texts(train_docs)

# sequence encode
encoded_docs = tokenizer.texts_to_sequences(train_docs)
# pad sequences
max_length = max([len(s.split()) for s in train_docs])
Xtrain = pad_sequences(encoded_docs, maxlen=max_length, padding='post')
# define training labels
ytrain = array([0 for _ in range(900)] + [1 for _ in range(900)])

# load all test reviews
positive_docs = process_docs('txt_sentoken/pos', vocab, False)
negative_docs = process_docs('txt_sentoken/neg', vocab, False)
test_docs = negative_docs + positive_docs
# sequence encode
encoded_docs = tokenizer.texts_to_sequences(test_docs)
# pad sequences
Xtest = pad_sequences(encoded_docs, maxlen=max_length, padding='post')
# define test labels
ytest = array([0 for _ in range(100)] + [1 for _ in range(100)])

# define vocabulary size (largest integer value)
vocab_size = len(tokenizer.word_index) + 1

# define model
model = Sequential()
model.add(Embedding(vocab_size, 100, input_length=max_length))
model.add(Conv1D(filters=32, kernel_size=8, activation='relu'))
model.add(MaxPooling1D(pool_size=2))
model.add(Flatten())
model.add(Dense(10, activation='relu'))
model.add(Dense(1, activation='sigmoid'))
print(model.summary())
# compile network
model.compile(loss='binary_crossentropy', optimizer='adam', metrics=['accuracy'])
# fit network
model.fit(Xtrain, ytrain, epochs=10, verbose=2)
# evaluate
loss, acc = model.evaluate(Xtest, ytest, verbose=0)
print('Test Accuracy: %f' % (acc*100))
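Once the network is fit, the same cleaning function, tokenizer, and max_length can be reused to score a new review. A minimal sketch (sample_review is a made-up example; everything else reuses objects defined above):

# score a new review with the trained model (sketch)
sample_review = "a wonderfully acted film with a smart and touching script"
# apply the same vocab filtering used for the training docs
sample = clean_doc(sample_review, vocab)
# integer encode and pad to the training length
encoded = tokenizer.texts_to_sequences([sample])
padded = pad_sequences(encoded, maxlen=max_length, padding='post')
# sigmoid output: probability that the review is positive
print(model.predict(padded)[0, 0])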

Training a word2vec Embedding


The word2vec algorithm processes documents sentence by sentence, which means we preserve the sentence-based structure during cleaning.

from string import punctuation
from os import listdir
from gensim.models import Word2Vec

# load doc into memory
def load_doc(filename):
    # open the file as read only
    file = open(filename, 'r')
    # read all text
    text = file.read()
    # close the file
    file.close()
    return text

# turn a doc into clean lines of tokens
def doc_to_clean_lines(doc, vocab):
    clean_lines = list()
    lines = doc.splitlines()
    for line in lines:
        # split into tokens by white space
        tokens = line.split()
        # remove punctuation from each token
        table = str.maketrans('', '', punctuation)
        tokens = [w.translate(table) for w in tokens]
        # filter out tokens not in vocab
        tokens = [w for w in tokens if w in vocab]
        clean_lines.append(tokens)
    return clean_lines

# load all docs in a directory
def process_docs(directory, vocab, is_train):
    lines = list()
    # walk through all files in the folder
    for filename in listdir(directory):
        # skip any reviews in the test set
        if is_train and filename.startswith('cv9'):
            continue
        if not is_train and not filename.startswith('cv9'):
            continue
        # create the full path of the file to open
        path = directory + '/' + filename
        # load and clean the doc
        doc = load_doc(path)
        doc_lines = doc_to_clean_lines(doc, vocab)
        # add lines to list
        lines += doc_lines
    return lines

# load the vocabulary
vocab_filename = 'vocab.txt'
vocab = load_doc(vocab_filename)
vocab = vocab.split()
vocab = set(vocab)

# load training data
positive_docs = process_docs('txt_sentoken/pos', vocab, True)
negative_docs = process_docs('txt_sentoken/neg', vocab, True)
sentences = negative_docs + positive_docs
print('Total training sentences: %d' % len(sentences))

# train word2vec model
model = Word2Vec(sentences, size=100, window=5, workers=8, min_count=1)
# summarize vocabulary size in model
words = list(model.wv.vocab)
print('Vocabulary size: %d' % len(words))

# save model in ASCII (word2vec) format
filename = 'embedding_word2vec.txt'
model.wv.save_word2vec_format(filename, binary=False)
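The training code above follows the gensim 3.x API used in the original tutorial. If you are running gensim 4.x (an assumption about your environment), the size argument was renamed to vector_size and model.wv.vocab was replaced by model.wv.key_to_index; a minimal equivalent sketch:

# gensim >= 4.0 equivalent of the training and export steps above
model = Word2Vec(sentences, vector_size=100, window=5, workers=8, min_count=1)
words = list(model.wv.key_to_index)
print('Vocabulary size: %d' % len(words))
model.wv.save_word2vec_format('embedding_word2vec.txt', binary=False)

The saved embedding_word2vec.txt file is then loaded and used as the frozen weights of a Keras Embedding layer in the next script.
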
from string import punctuation
from os import listdir
from numpy import array
from numpy import asarray
from numpy import zeros
from keras.preprocessing.text import Tokenizer
from keras.preprocessing.sequence import pad_sequences
from keras.models import Sequential
from keras.layers import Dense
from keras.layers import Flatten
from keras.layers import Embedding
from keras.layers.convolutional import Conv1D
from keras.layers.convolutional import MaxPooling1D

# load doc into memory
def load_doc(filename):
    # open the file as read only
    file = open(filename, 'r')
    # read all text
    text = file.read()
    # close the file
    file.close()
    return text

# turn a doc into clean tokens
def clean_doc(doc, vocab):
    # split into tokens by white space
    tokens = doc.split()
    # remove punctuation from each token
    table = str.maketrans('', '', punctuation)
    tokens = [w.translate(table) for w in tokens]
    # filter out tokens not in vocab
    tokens = [w for w in tokens if w in vocab]
    tokens = ' '.join(tokens)
    return tokens

# load all docs in a directory
def process_docs(directory, vocab, is_train):
    documents = list()
    # walk through all files in the folder
    for filename in listdir(directory):
        # skip any reviews in the test set
        if is_train and filename.startswith('cv9'):
            continue
        if not is_train and not filename.startswith('cv9'):
            continue
        # create the full path of the file to open
        path = directory + '/' + filename
        # load the doc
        doc = load_doc(path)
        # clean doc
        tokens = clean_doc(doc, vocab)
        # add to list
        documents.append(tokens)
    return documents

# load embedding as a dict
def load_embedding(filename):
    # load embedding into memory, skip first line (the word2vec header)
    file = open(filename, 'r')
    lines = file.readlines()[1:]
    file.close()
    # create a map of words to vectors
    embedding = dict()
    for line in lines:
        parts = line.split()
        # key is string word, value is numpy array for vector
        embedding[parts[0]] = asarray(parts[1:], dtype='float32')
    return embedding

# create a weight matrix for the Embedding layer from a loaded embedding
def get_weight_matrix(embedding, vocab):
    # total vocabulary size plus 0 for unknown words
    vocab_size = len(vocab) + 1
    # define weight matrix dimensions with all 0
    weight_matrix = zeros((vocab_size, 100))
    # step vocab, store vectors using the Tokenizer's integer mapping
    for word, i in vocab.items():
        weight_matrix[i] = embedding.get(word)
    return weight_matrix

# load the vocabulary
vocab_filename = 'vocab.txt'
vocab = load_doc(vocab_filename)
vocab = vocab.split()
vocab = set(vocab)

# load all training reviews
positive_docs = process_docs('txt_sentoken/pos', vocab, True)
negative_docs = process_docs('txt_sentoken/neg', vocab, True)
train_docs = negative_docs + positive_docs

# create the tokenizer
tokenizer = Tokenizer()
# fit the tokenizer on the documents
tokenizer.fit_on_texts(train_docs)

# sequence encode
encoded_docs = tokenizer.texts_to_sequences(train_docs)
# pad sequences
max_length = max([len(s.split()) for s in train_docs])
Xtrain = pad_sequences(encoded_docs, maxlen=max_length, padding='post')
# define training labels
ytrain = array([0 for _ in range(900)] + [1 for _ in range(900)])

# load all test reviews
positive_docs = process_docs('txt_sentoken/pos', vocab, False)
negative_docs = process_docs('txt_sentoken/neg', vocab, False)
test_docs = negative_docs + positive_docs
# sequence encode
encoded_docs = tokenizer.texts_to_sequences(test_docs)
# pad sequences
Xtest = pad_sequences(encoded_docs, maxlen=max_length, padding='post')
# define test labels
ytest = array([0 for _ in range(100)] + [1 for _ in range(100)])

# define vocabulary size (largest integer value)
vocab_size = len(tokenizer.word_index) + 1

# load embedding from file
raw_embedding = load_embedding('embedding_word2vec.txt')
# get vectors in the right order
embedding_vectors = get_weight_matrix(raw_embedding, tokenizer.word_index)
# create the embedding layer
embedding_layer = Embedding(vocab_size, 100, weights=[embedding_vectors], input_length=max_length, trainable=False)

# define model
model = Sequential()
model.add(embedding_layer)
model.add(Conv1D(filters=128, kernel_size=5, activation='relu'))
model.add(MaxPooling1D(pool_size=2))
model.add(Flatten())
model.add(Dense(1, activation='sigmoid'))
print(model.summary())
# compile network
model.compile(loss='binary_crossentropy', optimizer='adam', metrics=['accuracy'])
# fit network
model.fit(Xtrain, ytrain, epochs=10, verbose=2)
# evaluate
loss, acc = model.evaluate(Xtest, ytest, verbose=0)
print('Test Accuracy: %f' % (acc*100))
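get_weight_matrix() above assumes that every word in the Tokenizer's word_index has a vector in the loaded embedding, which holds here because the word2vec model was trained on the same vocabulary-filtered reviews with min_count=1. An optional sanity check along these lines (reusing raw_embedding and tokenizer from the script above) can confirm that before the weight matrix is built:

# optional: count Tokenizer words that have no word2vec vector
missing = [w for w in tokenizer.word_index if w not in raw_embedding]
print('Words without a vector: %d' % len(missing))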

Using a Pre-trained Embedding

from string import punctuation
from os import listdir
from numpy import array
from numpy import asarray
from numpy import zeros
from keras.preprocessing.text import Tokenizer
from keras.preprocessing.sequence import pad_sequences
from keras.models import Sequential
from keras.layers import Dense
from keras.layers import Flatten
from keras.layers import Embedding
from keras.layers.convolutional import Conv1D
from keras.layers.convolutional import MaxPooling1D

# load doc into memory
def load_doc(filename):
    # open the file as read only
    file = open(filename, 'r')
    # read all text
    text = file.read()
    # close the file
    file.close()
    return text

# turn a doc into clean tokens
def clean_doc(doc, vocab):
    # split into tokens by white space
    tokens = doc.split()
    # remove punctuation from each token
    table = str.maketrans('', '', punctuation)
    tokens = [w.translate(table) for w in tokens]
    # filter out tokens not in vocab
    tokens = [w for w in tokens if w in vocab]
    tokens = ' '.join(tokens)
    return tokens

# load all docs in a directory
def process_docs(directory, vocab, is_train):
    documents = list()
    # walk through all files in the folder
    for filename in listdir(directory):
        # skip any reviews in the test set
        if is_train and filename.startswith('cv9'):
            continue
        if not is_train and not filename.startswith('cv9'):
            continue
        # create the full path of the file to open
        path = directory + '/' + filename
        # load the doc
        doc = load_doc(path)
        # clean doc
        tokens = clean_doc(doc, vocab)
        # add to list
        documents.append(tokens)
    return documents

# load embedding as a dict
def load_embedding(filename):
    # load embedding into memory (GloVe files have no header line to skip)
    file = open(filename, 'r')
    lines = file.readlines()
    file.close()
    # create a map of words to vectors
    embedding = dict()
    for line in lines:
        parts = line.split()
        # key is string word, value is numpy array for vector
        embedding[parts[0]] = asarray(parts[1:], dtype='float32')
    return embedding

# create a weight matrix for the Embedding layer from a loaded embedding
def get_weight_matrix(embedding, vocab):
    # total vocabulary size plus 0 for unknown words
    vocab_size = len(vocab) + 1
    # define weight matrix dimensions with all 0
    weight_matrix = zeros((vocab_size, 100))
    # step vocab, store vectors using the Tokenizer's integer mapping
    for word, i in vocab.items():
        vector = embedding.get(word)
        if vector is not None:
            weight_matrix[i] = vector
    return weight_matrix

# load the vocabulary
vocab_filename = 'vocab.txt'
vocab = load_doc(vocab_filename)
vocab = vocab.split()
vocab = set(vocab)

# load all training reviews
positive_docs = process_docs('txt_sentoken/pos', vocab, True)
negative_docs = process_docs('txt_sentoken/neg', vocab, True)
train_docs = negative_docs + positive_docs

# create the tokenizer
tokenizer = Tokenizer()
# fit the tokenizer on the documents
tokenizer.fit_on_texts(train_docs)

# sequence encode
encoded_docs = tokenizer.texts_to_sequences(train_docs)
# pad sequences
max_length = max([len(s.split()) for s in train_docs])
Xtrain = pad_sequences(encoded_docs, maxlen=max_length, padding='post')
# define training labels
ytrain = array([0 for _ in range(900)] + [1 for _ in range(900)])

# load all test reviews
positive_docs = process_docs('txt_sentoken/pos', vocab, False)
negative_docs = process_docs('txt_sentoken/neg', vocab, False)
test_docs = negative_docs + positive_docs
# sequence encode
encoded_docs = tokenizer.texts_to_sequences(test_docs)
# pad sequences
Xtest = pad_sequences(encoded_docs, maxlen=max_length, padding='post')
# define test labels
ytest = array([0 for _ in range(100)] + [1 for _ in range(100)])

# define vocabulary size (largest integer value)
vocab_size = len(tokenizer.word_index) + 1

# load embedding from file
raw_embedding = load_embedding('glove.6B.100d.txt')
# get vectors in the right order
embedding_vectors = get_weight_matrix(raw_embedding, tokenizer.word_index)
# create the embedding layer
embedding_layer = Embedding(vocab_size, 100, weights=[embedding_vectors], input_length=max_length, trainable=False)

# define model
model = Sequential()
model.add(embedding_layer)
model.add(Conv1D(filters=128, kernel_size=5, activation='relu'))
model.add(MaxPooling1D(pool_size=2))
model.add(Flatten())
model.add(Dense(1, activation='sigmoid'))
print(model.summary())
# compile network
model.compile(loss='binary_crossentropy', optimizer='adam', metrics=['accuracy'])
# fit network
model.fit(Xtrain, ytrain, epochs=10, verbose=2)
# evaluate
loss, acc = model.evaluate(Xtest, ytest, verbose=0)
print('Test Accuracy: %f' % (acc*100))
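The glove.6B.100d.txt file referenced above comes from the Stanford NLP GloVe "6B" release (https://nlp.stanford.edu/projects/glove/) and must be downloaded and unzipped into the working directory first. With a pre-trained embedding it is also worth experimenting with fine-tuning: setting trainable=True lets the GloVe vectors be updated during training instead of being kept frozen. A minimal sketch of that variation (only the Embedding layer definition changes):

# variation: fine-tune the pre-trained GloVe vectors instead of freezing them
embedding_layer = Embedding(vocab_size, 100, weights=[embedding_vectors], input_length=max_length, trainable=True)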
