The ggml file format


The model files were downloaded with the
ggml/examples/gpt-2/download-model.sh script.
I used the GPT-2 117M model.
The model's vocabulary table, encoder.json:

{"!": 0,"\"": 1,"#": 2,"$": 3,"%": 4,"&": 5,"'": 6,"(": 7,")": 8,"*": 9,"+": 10,",": 11,"-": 12,".": 13,"/": 14,"0": 15,"1": 16,"2": 17,"3": 18,"4": 19,"5": 20,"6": 21,"7": 22,......省略"\u0120Collider": 50253,"\u0120informants": 50254,"\u0120gazed": 50255,"<|endoftext|>": 50256
}
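Escape sequences like "\u0120" in encoder.json come from GPT-2's reversible byte-to-unicode mapping (the bytes_to_unicode function in the conversion script below): every raw byte is mapped to a printable unicode character, so "\u0120" (Ġ) stands for a leading space byte. A self-contained sketch of that mapping:

```python
# Rebuild GPT-2's byte <-> unicode table (same logic as bytes_to_unicode
# in the conversion script) and decode a vocab entry back to raw bytes.
def bytes_to_unicode():
    bs = list(range(ord("!"), ord("~")+1)) + \
         list(range(ord("¡"), ord("¬")+1)) + \
         list(range(ord("®"), ord("ÿ")+1))
    cs = bs[:]
    n = 0
    for b in range(2**8):
        if b not in bs:
            bs.append(b)
            cs.append(2**8 + n)
            n += 1
    return dict(zip(bs, [chr(c) for c in cs]))

byte_encoder = bytes_to_unicode()
byte_decoder = {v: k for k, v in byte_encoder.items()}

print(byte_decoder["\u0120"])                         # 32 (a space byte)
print(bytes(byte_decoder[c] for c in "\u0120gazed"))  # b' gazed'
```

So the token "\u0120gazed" with id 50255 is really the byte string b" gazed" (the word "gazed" with its leading space).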

Understanding the ggml file format through the conversion script

ggml/examples/gpt-2/convert-ckpt-to-ggml.py :

# Convert a model checkpoint to a ggml compatible file
#
# Load the model using TensorFlow.
# Iterate over all variables and write them to a binary file.
#
# For each variable, write the following:
#   - Number of dimensions (int)
#   - Name length (int)
#   - Dimensions (int[n_dims])
#   - Name (char[name_length])
#   - Data (float[n_dims])
#
# By default, the bigger matrices are converted to 16-bit floats.
# This can be disabled by adding the "use-f32" CLI argument.
#
# At the start of the ggml file we write the model parameters
# and vocabulary.
import sys
import json
import struct
import numpy as np
import tensorflow as tf

# ref: https://github.com/openai/gpt-2/blob/master/src/encoder.py
def bytes_to_unicode():
    """
    Returns list of utf-8 byte and a corresponding list of unicode strings.
    The reversible bpe codes work on unicode strings.
    This means you need a large # of unicode characters in your vocab if you want to avoid UNKs.
    When you're at something like a 10B token dataset you end up needing around 5K for decent coverage.
    This is a signficant percentage of your normal, say, 32K bpe vocab.
    To avoid that, we want lookup tables between utf-8 bytes and unicode strings.
    And avoids mapping to whitespace/control characters the bpe code barfs on.
    """
    bs = list(range(ord("!"), ord("~")+1))+list(range(ord("¡"), ord("¬")+1))+list(range(ord("®"), ord("ÿ")+1))
    cs = bs[:]
    n = 0
    for b in range(2**8):
        if b not in bs:
            bs.append(b)
            cs.append(2**8+n)
            n += 1
    cs = [chr(n) for n in cs]
    return dict(zip(bs, cs))

# helper method to convert a numpy array to different float types
def convert_to_ftype(data, ftype):
    # fp16
    if ftype == 1:
        return data.astype(np.float16)

    assert False, "Invalid ftype: " + str(ftype)

if len(sys.argv) < 3:
    print("Usage: convert-ckpt-to-ggml.py dir-model ftype\n")
    print("  ftype == 0 -> float32")
    print("  ftype == 1 -> float16")
    sys.exit(1)

# output in the same directory as the model
dir_model = sys.argv[1]
fname_out = sys.argv[1] + "/ggml-model.bin"

# load the vocabulary table (encoder.json)
with open(dir_model + "/encoder.json", "r", encoding="utf-8") as f:
    encoder = json.load(f)

# load the model hyperparameters (hparams.json)
with open(dir_model + "/hparams.json", "r", encoding="utf-8") as f:
    hparams = json.load(f)

# possible data types
#   ftype == 0 -> float32
#   ftype == 1 -> float16
#
# map from ftype to string
ftype_str = ["f32", "f16"]

# name of the exported ggml file
ftype = 1
if len(sys.argv) > 2:
    ftype = int(sys.argv[2])
    if ftype < 0 or ftype > 1:
        print("Invalid ftype: " + str(ftype))
        sys.exit(1)
    fname_out = sys.argv[1] + "/ggml-model-" + ftype_str[ftype] + ".bin"

# list the variables stored in the TensorFlow checkpoint: names, shapes, etc.
list_vars = tf.train.list_variables(dir_model)

fout = open(fname_out, "wb")

# write the ggml file header
fout.write(struct.pack("i", 0x67676d6c)) # magic: ggml in hex

# write the hyperparameters from hparams.json
fout.write(struct.pack("i", hparams["n_vocab"]))
fout.write(struct.pack("i", hparams["n_ctx"]))
fout.write(struct.pack("i", hparams["n_embd"]))
fout.write(struct.pack("i", hparams["n_head"]))
fout.write(struct.pack("i", hparams["n_layer"]))

# write the quantization type
fout.write(struct.pack("i", ftype))

# byte <-> unicode lookup tables
byte_encoder = bytes_to_unicode()
byte_decoder = {v:k for k, v in byte_encoder.items()}

# write the size of the vocabulary table
fout.write(struct.pack("i", len(encoder)))

# for every token, decode its unicode characters back to the raw bytes,
# then write the byte length followed by the bytes themselves
for key in encoder:
    text = bytearray([byte_decoder[c] for c in key])
    fout.write(struct.pack("i", len(text)))
    fout.write(text)

# iterate over the model variables (name, shape)
for name, shape in list_vars:
    print("Processing variable: " + name + " with shape: ", shape)

    # load the variable as a NumPy array and drop the size-1 dimensions
    data = tf.train.load_variable(dir_model, name).squeeze()
    n_dims = len(data.shape)

    # for efficiency - transpose the projection matrices
    # "model/h.*/attn/c_attn/w"
    # "model/h.*/attn/c_proj/w"
    # "model/h.*/mlp/c_fc/w"
    # "model/h.*/mlp/c_proj/w"
    if name[-14:] == "/attn/c_attn/w" or \
       name[-14:] == "/attn/c_proj/w" or \
       name[-11:] == "/mlp/c_fc/w" or \
       name[-13:] == "/mlp/c_proj/w":
        print("  Transposing")
        data = data.transpose()

    dshape = data.shape

    # convert the large weight matrices to the requested ftype;
    # all other variables stay float32
    ftype_cur = 0
    if ftype != 0:
        # match name:
        #  "model/wte"
        #  "model/h.*/attn/c_attn/w"
        #  "model/h.*/attn/c_proj/w"
        #  "model/h.*/mlp/c_fc/w"
        #  "model/h.*/mlp/c_proj/w"
        if name == "model/wte" or name[-2:] == "/w":
            print("  Converting to " + ftype_str[ftype])
            data = convert_to_ftype(data, ftype)
            ftype_cur = ftype
        else:
            print("  Converting to float32")
            data = data.astype(np.float32)
            ftype_cur = 0

    # per-variable header: number of dimensions, name length, quantization type
    str = name.encode('utf-8')
    fout.write(struct.pack("iii", n_dims, len(str), ftype_cur))
    # the dimension sizes are written in reverse order
    for i in range(n_dims):
        fout.write(struct.pack("i", dshape[n_dims - 1 - i]))
    # write the variable name
    fout.write(str)
    # write the raw tensor data
    data.tofile(fout)

# all parameters written: close the file
fout.close()

print("Done. Output file: " + fname_out)
print("")

Converting the model to a ggml file

 python ggml/examples/gpt-2/convert-ckpt-to-ggml.py  ./models/gpt-2-117M/   1

The output is as follows:

Processing variable: model/h0/attn/c_attn/b with shape:  [2304]
  Converting to float32
Processing variable: model/h0/attn/c_attn/w with shape:  [1, 768, 2304]
  Transposing
  Converting to f16
Processing variable: model/h0/attn/c_proj/b with shape:  [768]
  Converting to float32
Processing variable: model/h0/attn/c_proj/w with shape:  [1, 768, 768]
  Transposing
  Converting to f16
Processing variable: model/h0/ln_1/b with shape:  [768]
  Converting to float32
Processing variable: model/h0/ln_1/g with shape:  [768]
  Converting to float32
Processing variable: model/h0/ln_2/b with shape:  [768]
  Converting to float32
Processing variable: model/h0/ln_2/g with shape:  [768]
  Converting to float32
Processing variable: model/h0/mlp/c_fc/b with shape:  [3072]
  Converting to float32
Processing variable: model/h0/mlp/c_fc/w with shape:  [1, 768, 3072]
  Transposing
  Converting to f16
Processing variable: model/h0/mlp/c_proj/b with shape:  [768]
  Converting to float32
Processing variable: model/h0/mlp/c_proj/w with shape:  [1, 3072, 768]
  Transposing
  Converting to f16
...... (identical output for layers h1 through h11 omitted) ......
Processing variable: model/ln_f/b with shape:  [768]
  Converting to float32
Processing variable: model/ln_f/g with shape:  [768]
  Converting to float32
Processing variable: model/wpe with shape:  [1024, 768]
  Converting to float32
Processing variable: model/wte with shape:  [50257, 768]
  Converting to f16
Done. Output file: ./models/gpt-2-117M//ggml-model-f16.bin
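The log matches the per-variable layout from the script's header comment. As a sketch, the record for a single variable can be packed and unpacked like this; the name model/ln_f/g and its shape [768] come from the log above, while the all-ones data is a hypothetical stand-in for the real weights.

```python
import struct
import io
import numpy as np

# Hypothetical single-variable record: n_dims, name length, ftype,
# dims (in reverse order), name bytes, raw tensor data.
name = b"model/ln_f/g"
data = np.ones(768, dtype=np.float32)

buf = io.BytesIO()
buf.write(struct.pack("iii", data.ndim, len(name), 0))  # ftype 0 -> float32
for i in range(data.ndim):
    buf.write(struct.pack("i", data.shape[data.ndim - 1 - i]))
buf.write(name)
buf.write(data.tobytes())

# parse the record back
buf.seek(0)
n_dims, name_len, ftype_cur = struct.unpack("iii", buf.read(12))
dims = struct.unpack(str(n_dims) + "i", buf.read(4 * n_dims))
rname = buf.read(name_len).decode("utf-8")
tensor = np.frombuffer(buf.read(), dtype=np.float32).reshape(dims[::-1])
print(rname, tensor.shape)  # model/ln_f/g (768,)
```

A loader reads records like this in a loop until it reaches the end of the file; because the dimension sizes were written in reverse, they are reversed again before reshaping.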
