Python Libraries for NLP (3/4): A Comprehensive Overview

1. Introduction

Python has rich support for natural language processing. From text processing (tokenizing a text and determining its lemmas), through syntactic analysis (parsing a text and assigning syntactic roles), to semantic processing (for example named entity recognition, sentiment analysis, and document classification), every task is covered by at least one library. So, where do you start?

The goal of this article is to give an overview of relevant Python libraries for each core NLP task. The libraries are introduced with a brief description and a concrete code snippet for the task at hand. Continuing my introductory NLP blog post, this article covers only libraries for the core NLP tasks of text processing, syntactic and semantic analysis, and document semantics. In addition, in the NLP utilities category, libraries for corpus management and datasets are presented.

The following libraries are covered:

  • NLTK
  • TextBlob
  • spaCy
  • scikit-learn
  • Gensim 

2. Core NLP Tasks

2.1 Text Processing

Tasks: tokenization, lemmatization, stemming, part-of-speech tagging

The NLTK library provides a complete toolkit for text processing, including tokenization, stemming, and lemmatization. Tokenization is shown first; a stemming and lemmatization sketch follows the snippet below.

from nltk.tokenize import sent_tokenize, word_tokenize

# Source: Wikipedia, Artificial Intelligence, https://en.wikipedia.org/wiki/Artificial_intelligence
paragraph = '''Artificial intelligence was founded as an academic discipline in 1956, and in the years since it has experienced several waves of optimism, followed by disappointment and the loss of funding (known as an "AI winter"), followed by new approaches, success, and renewed funding. AI research has tried and discarded many different approaches, including simulating the brain, modeling human problem solving, formal logic, large databases of knowledge, and imitating animal behavior. In the first decades of the 21st century, highly mathematical and statistical machine learning has dominated the field, and this technique has proved highly successful, helping to solve many challenging problems throughout industry and academia.'''

sentences = []
for sent in sent_tokenize(paragraph):
    sentences.append(word_tokenize(sent))

sentences[0]
# ['Artificial', 'intelligence', 'was', 'founded', 'as', 'an', 'academic', 'discipline'
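
As promised above, the same toolkit also covers stemming and lemmatization; here is a minimal sketch, assuming the WordNet data has been downloaded (outputs are approximate and depend on the installed NLTK version):

from nltk import download as nltk_download
from nltk.stem import PorterStemmer, WordNetLemmatizer

nltk_download('wordnet')  # required once for the WordNet lemmatizer

stemmer = PorterStemmer()
print(stemmer.stem('experienced'))
# 'experienc' (stems are truncated word forms, not necessarily dictionary words)

lemmatizer = WordNetLemmatizer()
print(lemmatizer.lemmatize('waves'))
# 'wave'
print(lemmatizer.lemmatize('was', pos='v'))
# 'be'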

With TextBlob, the same text processing tasks are supported. It differs from NLTK in delivering higher-level semantic results and easy-to-use data structures: parsing a sentence already produces rich semantic information.

from textblob import TextBlob

text = '''
Artificial intelligence was founded as an academic discipline in 1956, and in the years since it has experienced several waves of optimism, followed by disappointment and the loss of funding (known as an "AI winter"), followed by new approaches, success, and renewed funding. AI research has tried and discarded many different approaches, including simulating the brain, modeling human problem solving, formal logic, large databases of knowledge, and imitating animal behavior. In the first decades of the 21st century, highly mathematical and statistical machine learning has dominated the field, and this technique has proved highly successful, helping to solve many challenging problems throughout industry and academia.
'''

blob = TextBlob(text)

blob.ngrams()
#[WordList(['Artificial', 'intelligence', 'was']),
# WordList(['intelligence', 'was', 'founded']),
# WordList(['was', 'founded', 'as']),

blob.tokens
# WordList(['Artificial', 'intelligence', 'was', 'founded', 'as', 'an', 'academic', 'discipline', 'in', '1956', ',', 'and', 'in',
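
Lemmatization is available in TextBlob as well, via its Word wrapper; a short sketch (the argument 'v' is the WordNet part-of-speech tag for verbs):

from textblob import Word

print(Word('founded').lemmatize('v'))
# 'found'
print(Word('waves').lemmatize())
# 'wave'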

With spaCy, a modern NLP library, text processing is just the first step in a rich pipeline of mostly semantic tasks. Unlike the other libraries, it requires loading a model for the target language first. Recent models are not heuristic but artificial neural networks, especially Transformers, which provide richer abstractions and can be combined better with other models.

import spacy
nlp = spacy.load('en_core_web_lg')

text = '''
Artificial intelligence was founded as an academic discipline in 1956, and in the years since it has experienced several waves of optimism, followed by disappointment and the loss of funding (known as an "AI winter"), followed by new approaches, success, and renewed funding. AI research has tried and discarded many different approaches, including simulating the brain, modeling human problem solving, formal logic, large databases of knowledge, and imitating animal behavior. In the first decades of the 21st century, highly mathematical and statistical machine learning has dominated the field, and this technique has proved highly successful, helping to solve many challenging problems throughout industry and academia.
'''

doc = nlp(text)
tokens = [token for token in doc]

print(tokens)
# [Artificial, intelligence, was, founded, as, an, academic, discipline
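
The lemmas promised in the introduction are attributes of the very same token objects; a brief sketch (output shortened, whitespace tokens skipped, and the exact lemmas depend on the model version):

print([(token.text, token.lemma_) for token in doc if not token.is_space][:4])
# [('Artificial', 'artificial'), ('intelligence', 'intelligence'), ('was', 'be'), ('founded', 'found')]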

2.2 Text Syntax

Tasks: parsing, part-of-speech tagging, noun phrase extraction

Starting with NLTK, all syntactic tasks are supported. Their output is provided as native Python data structures, and it can always be displayed as plain text.

from nltk.tokenize import word_tokenize
from nltk import pos_tag, RegexpParser

# Source: Wikipedia, Artificial Intelligence, https://en.wikipedia.org/wiki/Artificial_intelligence
text = '''
Artificial intelligence was founded as an academic discipline in 1956, and in the years since it has experienced several waves of optimism, followed by disappointment and the loss of funding (known as an "AI winter"), followed by new approaches, success, and renewed funding. AI research has tried and discarded many different approaches, including simulating the brain, modeling human problem solving, formal logic, large databases of knowledge, and imitating animal behavior. In the first decades of the 21st century, highly mathematical and statistical machine learning has dominated the field, and this technique has proved highly successful, helping to solve many challenging problems throughout industry and academia.
'''

pos_tag(word_tokenize(text))
# [('Artificial', 'JJ'),
#  ('intelligence', 'NN'),
#  ('was', 'VBD'),
#  ('founded', 'VBN'),
#  ('as', 'IN'),
#  ('an', 'DT'),
#  ('academic', 'JJ'),
#  ('discipline', 'NN'),

# noun chunk parser
# source: https://www.nltk.org/book_1ed/ch07.html
grammar = "NP: {<DT>?<JJ>*<NN>}"
parser = RegexpParser(grammar)

parser.parse(pos_tag(word_tokenize(text)))
#(S
#  (NP Artificial/JJ intelligence/NN)
#  was/VBD
#  founded/VBN
#  as/IN
#  (NP an/DT academic/JJ discipline/NN)
#  in/IN
#  1956/CD

TextBlob provides POS tags as soon as it processes a text. With a second method, a parse tree is created that contains rich syntactic information.

from textblob import TextBlob

text = '''
Artificial intelligence was founded as an academic discipline in 1956, and in the years since it has experienced several waves of optimism, followed by disappointment and the loss of funding (known as an "AI winter"), followed by new approaches, success, and renewed funding. AI research has tried and discarded many different approaches, including simulating the brain, modeling human problem solving, formal logic, large databases of knowledge, and imitating animal behavior. In the first decades of the 21st century, highly mathematical and statistical machine learning has dominated the field, and this technique has proved highly successful, helping to solve many challenging problems throughout industry and academia.
'''

blob = TextBlob(text)
blob.tags
#[('Artificial', 'JJ'),
# ('intelligence', 'NN'),
# ('was', 'VBD'),
# ('founded', 'VBN'),

blob.parse()
# Artificial/JJ/B-NP/O
# intelligence/NN/I-NP/O
# was/VBD/B-VP/O
# founded/VBN/I-VP/O
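
Noun phrase extraction, the third task named above, is a one-liner in TextBlob; a short sketch (the phrases are returned lowercased, output abbreviated):

blob.noun_phrases
# WordList(['artificial intelligence', 'academic discipline', 'ai winter', ...])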

The spaCy library uses Transformer neural networks to support its syntactic tasks.

import spacy
nlp = spacy.load('en_core_web_lg')

for token in nlp(text):
    print(f'{token.text:<20}{token.pos_:>5}{token.tag_:>5}')

#Artificial            ADJ   JJ
#intelligence         NOUN   NN
#was                   AUX  VBD
#founded              VERB  VBN
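
spaCy also covers noun phrase extraction, via the noun_chunks iterator of a parsed document; a short sketch (output abbreviated and approximate):

for chunk in nlp(text).noun_chunks:
    print(chunk.text)

# Artificial intelligence
# an academic discipline
# the years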

2.3 Text Semantics

Tasks: named entity recognition, word sense disambiguation, semantic role labeling

Semantic analysis is where NLP approaches begin to differ. With NLTK, the generated syntactic information is looked up in dictionaries to identify, for example, named entities. Consequently, when working with newer texts, entities may not be recognized.

from nltk import download as nltk_download
from nltk.tokenize import word_tokenize
from nltk import pos_tag, ne_chunk

nltk_download('maxent_ne_chunker')
nltk_download('words')

# Source: Wikipedia, Spacecraft, https://en.wikipedia.org/wiki/Spacecraft
text = '''
As of 2016, only three nations have flown crewed spacecraft: USSR/Russia, USA, and China. The first crewed spacecraft was Vostok 1, which carried Soviet cosmonaut Yuri Gagarin into space in 1961, and completed a full Earth orbit. There were five other crewed missions which used a Vostok spacecraft. The second crewed spacecraft was named Freedom 7, and it performed a sub-orbital spaceflight in 1961 carrying American astronaut Alan Shepard to an altitude of just over 187 kilometers (116 mi). There were five other crewed missions using Mercury spacecraft.
'''

print(ne_chunk(pos_tag(word_tokenize(text))))
# (S
#   As/IN
#   of/IN
#   [...]
#   (ORGANIZATION USA/NNP)
#   [...]
#   which/WDT
#   carried/VBD
#   (GPE Soviet/JJ)
#   cosmonaut/NN
#   (PERSON Yuri/NNP Gagarin/NNP)
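
For word sense disambiguation, also listed among the tasks above, NLTK ships the classic Lesk algorithm, which again relies on a dictionary (WordNet); a minimal sketch (the resulting synset depends on the installed WordNet version):

from nltk import download as nltk_download
from nltk.tokenize import word_tokenize
from nltk.wsd import lesk

nltk_download('wordnet')

sentence = 'The spacecraft completed a full Earth orbit'
print(lesk(word_tokenize(sentence), 'orbit'))
# Synset('orbit.n.01')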

The Transformer models used by the spaCy library carry an implicit "timestamp": the time they were trained. This determines which texts the models have seen and, therefore, which entities they are able to recognize.

import spacy
nlp = spacy.load('en_core_web_lg')

text = '''
As of 2016, only three nations have flown crewed spacecraft: USSR/Russia, USA, and China. The first crewed spacecraft was Vostok 1, which carried Soviet cosmonaut Yuri Gagarin into space in 1961, and completed a full Earth orbit. There were five other crewed missions which used a Vostok spacecraft. The second crewed spacecraft was named Freedom 7, and it performed a sub-orbital spaceflight in 1961 carrying American astronaut Alan Shepard to an altitude of just over 187 kilometers (116 mi). There were five other crewed missions using Mercury spacecraft.
'''

doc = nlp(text)
for token in doc.ents:
    print(f'{token.text:<25}{token.label_:<15}')

# 2016                   DATE
# only three             CARDINAL
# USSR                   GPE
# Russia                 GPE
# USA                    GPE
# China                  GPE
# first                  ORDINAL
# Vostok 1               PRODUCT
# Soviet                 NORP
# Yuri Gagarin           PERSON
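
To inspect the recognized entities visually, spaCy's built-in displacy visualizer can render them inline; a usage sketch:

from spacy import displacy

displacy.render(doc, style='ent')   # inside a Jupyter notebook
# displacy.serve(doc, style='ent')  # alternatively, from a script (starts a local web server)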

2.4 Document Semantics

Tasks: text classification, topic modeling, sentiment analysis, toxicity detection

Sentiment analysis is also a task where NLP approaches differ: looking up word meanings in a dictionary versus learned word similarities encoded in word or document vectors.

TextBlob has built-in sentiment analysis that returns the polarity (the overall positive or negative connotation) and the subjectivity (the degree of personal opinion) of a text.

from textblob import TextBlob

text = '''
Artificial intelligence was founded as an academic discipline in 1956, and in the years since it has experienced several waves of optimism, followed by disappointment and the loss of funding (known as an "AI winter"), followed by new approaches, success, and renewed funding. AI research has tried and discarded many different approaches, including simulating the brain, modeling human problem solving, formal logic, large databases of knowledge, and imitating animal behavior. In the first decades of the 21st century, highly mathematical and statistical machine learning has dominated the field, and this technique has proved highly successful, helping to solve many challenging problems throughout industry and academia.
'''

blob = TextBlob(text)

blob.sentiment
#Sentiment(polarity=0.16180290297937355, subjectivity=0.42155589508530683)
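
Since polarity and subjectivity are aggregated over the whole text, it can be instructive to look at the individual sentences; a short sketch (the printed scores are illustrative):

for sentence in blob.sentences:
    print(f'{sentence.sentiment.polarity:+.2f}  {sentence.raw[:50]}')
# prints one polarity score per sentence, e.g. '+0.18  Artificial intelligence was founded as an ac'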

spaCy does not include text classification out of the box, but it can be extended as a separate pipeline step. The code below is rather long and involves several spaCy-internal objects and data structures; a future article will explain it in more detail.

import spacy
from spacy.tokens import DocBin

## train single-label categorization from a multi-label dataset
## (assumes `dataset` is a corpus object exposing categories() and raw(),
##  and `get_text` is a helper returning the raw text of a file id)
def convert_single_label(dataset, filename):
    db = DocBin()
    nlp = spacy.load('en_core_web_lg')
    for index, fileid in enumerate(dataset):
        cat_dict = {cat: 0 for cat in dataset.categories()}
        cat_dict[dataset.categories(fileid).pop()] = 1
        doc = nlp(get_text(fileid))
        doc.cats = cat_dict
        db.add(doc)
    db.to_disk(filename)

## load the trained model and apply it to a text
nlp = spacy.load('textcat_multilabel_model/model-best')
text = dataset.raw(42)
doc = nlp(text)

estimated_cats = sorted(doc.cats.items(), key=lambda i: float(i[1]), reverse=True)

print(dataset.categories(42))
# ['orange']

print(estimated_cats)
# [('nzdlr', 0.998894989490509), ('money-supply', 0.9969857335090637), ... ('orange', 0.7344251871109009),

scikit-learn is a general-purpose machine learning library that offers many clustering and classification algorithms. It operates on numeric input only, so the text needs to be vectorized first, for example with Gensim's pretrained word vectors or with the built-in feature vectorizers. To give just one example, here is a snippet that vectorizes documents and then applies KMeans clustering to them.

from sklearn.feature_extraction import DictVectorizer
from sklearn.cluster import KMeans

# assumes dataset['train'] holds one feature dictionary per document
vectorizer = DictVectorizer(sparse=False)
x_train = vectorizer.fit_transform(dataset['train'])

kmeans = KMeans(n_clusters=8, random_state=0, n_init="auto").fit(x_train)

print(kmeans.labels_.shape)
# (8551, )

print(kmeans.labels_)
# [4 4 4 ... 6 6 6]
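
Since the snippet above presupposes ready-made feature dictionaries in dataset['train'], here is a self-contained variant as a sketch, using the built-in TfidfVectorizer on the 20 newsgroups dataset (all parameter choices are illustrative):

from sklearn.datasets import fetch_20newsgroups
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import KMeans

dataset = fetch_20newsgroups(remove=('headers', 'footers', 'quotes'))

# turn raw text into sparse TF-IDF vectors
vectorizer = TfidfVectorizer(max_features=5000, stop_words='english')
x_train = vectorizer.fit_transform(dataset.data)

# n_init='auto' requires scikit-learn 1.2 or newer
kmeans = KMeans(n_clusters=20, random_state=0, n_init='auto').fit(x_train)
print(kmeans.labels_[:10])
# one cluster label per document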

Finally, Gensim is a library specialized in topic classification of large-scale corpora. The following snippet loads a built-in dataset, vectorizes the tokens of each document, and runs the clustering algorithm LDA. When running on a CPU only, this can take up to 15 minutes.

# source: https://radimrehurek.com/gensim/auto_examples/tutorials/run_lda.html,
#         https://radimrehurek.com/gensim/auto_examples/howtos/run_downloader_api.html
import logging
import gensim.downloader as api
from gensim.corpora import Dictionary
from gensim.models import LdaModel

logging.basicConfig(format='%(asctime)s : %(levelname)s : %(message)s', level=logging.INFO)

docs = api.load('text8')
dictionary = Dictionary(docs)
corpus = [dictionary.doc2bow(doc) for doc in docs]

_ = dictionary[0]
id2word = dictionary.id2token

# Define and train the model
model = LdaModel(
    corpus=corpus,
    id2word=id2word,
    chunksize=2000,
    alpha='auto',
    eta='auto',
    iterations=400,
    num_topics=10,
    passes=20,
    eval_every=None
)

print(model.num_topics)
# 10

print(model.top_topics(corpus)[6])
#  ([(4.201401e-06, 'done'),
#    (4.1998064e-06, 'zero'),
#    (4.1478743e-06, 'eight'),
#    (4.1257395e-06, 'one'),
#    (4.1166854e-06, 'two'),
#    (4.085097e-06, 'six'),
#    (4.080696e-06, 'language'),
#    (4.050306e-06, 'system'),
#    (4.041121e-06, 'network'),
#    (4.0385708e-06, 'internet'),
#    (4.0379923e-06, 'protocol'),
#    (4.035399e-06, 'open'),
#    (4.033435e-06, 'three'),
#    (4.0334166e-06, 'interface'),
#    (4.030141e-06, 'four'),
#    (4.0283044e-06, 'seven'),
#    (4.0163245e-06, 'no'),
#    (4.0149207e-06, 'i'),
#    (4.0072555e-06, 'object'),
#    (4.007036e-06, 'programming')],
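
Once trained, the model can also be queried for the topic distribution of a single document; a quick sketch (the numbers shown are illustrative):

print(model.get_document_topics(corpus[0]))
# [(3, 0.22), (6, 0.71), ...]  (topic id, probability) pairs above the default threshold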

3. Utilities

3.1 Corpus Management

NLTK provides corpus readers for plain text, markdown, and even Twitter feeds in JSON format. A reader is created by passing a file path, and it then provides basic statistics as well as iterators to process all files it finds.

from nltk.corpus.reader.plaintext import PlaintextCorpusReader

corpus = PlaintextCorpusReader('wikipedia_articles', r'.*\.txt')

print(corpus.fileids())
# ['AI_alignment.txt', 'AI_safety.txt', 'Artificial_intelligence.txt', 'Machine_learning.txt', ...]

print(len(corpus.sents()))
# 47289

print(len(corpus.words()))
# 1146248

Gensim processes text files into a word-vector representation of each document, which can then be used for its main use case, topic classification. The documents need to be processed by an iterator that wraps the traversal of a directory, and the corpus is then built as a collection of word vectors. However, this corpus representation is hard to externalize and reuse with other libraries. The following snippet is an excerpt of the code above: it loads a dataset included in Gensim and then creates a word-vector-based representation.

import gensim.downloader as api
from gensim.corpora import Dictionary

docs = api.load('text8')
dictionary = Dictionary(docs)
corpus = [dictionary.doc2bow(doc) for doc in docs]

print('Number of unique tokens: %d' % len(dictionary))
# Number of unique tokens: 253854

print('Number of documents: %d' % len(corpus))
# Number of documents: 1701

3.2 Datasets

NLTK provides several ready-to-use datasets, for example Reuters news excerpts, European Parliament proceedings, and open books from the Gutenberg collection. See the complete list of datasets and models.

from nltk.corpus import reuters

print(len(reuters.fileids()))
# 10788

print(reuters.categories()[:43])
# ['acq', 'alum', 'barley', 'bop', 'carcass', 'castor-oil', 'cocoa', 'coconut', 'coconut-oil', 'coffee', 'copper', 'copra-cake', 'corn', 'cotton', 'cotton-oil', 'cpi', 'cpu', 'crude', 'dfl', 'dlr', 'dmk', 'earn', 'fuel', 'gas', 'gnp', 'gold', 'grain', 'groundnut', 'groundnut-oil', 'heat', 'hog', 'housing', 'income', 'instal-debt', 'interest', 'ipi', 'iron-steel', 'jet', 'jobs', 'l-cattle', 'lead', 'lei', 'lin-oil']
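
The category labels can also be used to select documents, which is convenient for classification experiments; a short sketch:

coffee_files = reuters.fileids('coffee')
print(reuters.raw(coffee_files[0])[:60])
# prints the first characters of the first document in the 'coffee' category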

scikit-learn includes datasets from newsgroups, real estate, and even IT intrusion detection; see the complete list. Here is a quick example for the newsgroups dataset.

from sklearn.datasets import fetch_20newsgroups

dataset = fetch_20newsgroups()

dataset.data[1]
# "From: guykuo@carson.u.washington.edu (Guy Kuo)\nSubject: SI Clock Poll - Final Call\nSummary: Final call for SI clock reports\nKeywords: SI,acceleration,clock,upgrade\nArticle-I.D.: shelley.1qvfo9INNc3s\nOrganization: University of Washington\nLines: 11\nNNTP-Posting-Host: carson.u.washington.edu\n\nA fair number of brave souls who upgraded their SI clock oscillator have\nshared their experiences for this poll.

4. Conclusion

For NLP projects in Python, there is an abundance of library choices. To help you get started, this article provided an NLP task-driven overview with compact library explanations and code snippets. Starting with text processing, you saw how to create tokens and lemmas from a text. Continuing with syntactic analysis, you learned how to generate part-of-speech tags and the grammatical structure of sentences. Arriving at semantics, recognizing named entities in a text, as well as text sentiment, can also be solved in a few lines of code. For the additional tasks of corpus management and accessing pre-structured datasets, you also saw library examples. Altogether, this article should give you a good start into your next NLP project when working on core NLP tasks.
