Chinese-LLaMA-Alpaca-2: Model Quantization, Deployment, and Testing

Introduction

Chinese-LLaMA-Alpaca-2 is the second phase of the Chinese LLaMA & Alpaca large-model project, built on Llama-2, the commercially licensed large model released by Meta.

Quantization

As before, the model is downloaded with the hfd script:

bash hfd.sh hfl/chinese-alpaca-2-13b --tool aria2c -x 8

Quantization is done with llama.cpp, mainly following the referenced tutorial.
The most troublesome part is building it together with BLAS.

OpenBLAS

This one is hard to sum up: it took a lot of fiddling and still did not work (issue1 & issue2), and the reported speedup is small anyway. I may try again later to see whether it can be made to work.
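For the record, a sketch of the OpenBLAS build invocation as documented by llama.cpp around this build vintage; the apt package name is an assumption for Ubuntu, and root is needed for it (otherwise build OpenBLAS from source into a user prefix):

# install OpenBLAS headers/libs (Ubuntu package name, assumption; requires root)
sudo apt install libopenblas-dev
# configure llama.cpp against OpenBLAS (flags per the llama.cpp docs of this vintage)
mkdir build && cd build
cmake .. -DLLAMA_BLAS=ON -DLLAMA_BLAS_VENDOR=OpenBLAS
cmake --build . --config Release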

cuBLAS

This gives a noticeable speedup. With an Nvidia GPU available, the only fiddly part should be installing the CUDA toolkit manually as a non-root user and then pointing CMakeLists.txt at its path.
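As a minimal sketch of the non-root install, the NVIDIA runfile installer can drop just the toolkit into a user-writable directory; the exact runfile name/version and target path below are assumptions, substitute your own:

# download the CUDA 11.8 runfile installer (file name/version are assumptions; pick the one matching your driver)
wget https://developer.download.nvidia.com/compute/cuda/11.8.0/local_installers/cuda_11.8.0_520.61.05_linux.run
# install only the toolkit into a user-writable directory, no root required
sh cuda_11.8.0_520.61.05_linux.run --silent --toolkit --toolkitpath=$HOME/software/cuda-118
# make the toolkit visible to the shell and the build
export PATH=$HOME/software/cuda-118/bin:$PATH
export LD_LIBRARY_PATH=$HOME/software/cuda-118/lib64:$LD_LIBRARY_PATH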
After manually installing the CUDA toolkit and cuDNN, add the following to CMakeLists.txt:

# example value of ${cuda path}: /home/orange/software/cuda-118
set(CUDA_TOOLKIT_ROOT_DIR ${cuda path})

Then build:

mkdir build
cd build
cmake .. -DLLAMA_CUBLAS=ON
cmake --build . --config Release

Quantizing the model

Once llama.cpp is built, convert and quantize the model:

python convert.py zh-models/chinese-alpaca-2-7b/
./build/bin/quantize ./zh-models/chinese-alpaca-2-7b/ggml-model-f16.gguf ./zh-models/chinese-alpaca-2-7b/ggml-model-q8_0.gguf q8_0
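If disk space allows, it can be handy to produce several quantization levels from the same f16 GGUF and compare them later; a minimal sketch using the same quantize binary (the type names q4_0/q5_0/q8_0 follow llama.cpp's naming of this era):

# produce a few common quantization levels from the same f16 model (sketch)
for q in q4_0 q5_0 q8_0; do
  ./build/bin/quantize ./zh-models/chinese-alpaca-2-7b/ggml-model-f16.gguf \
    ./zh-models/chinese-alpaca-2-7b/ggml-model-${q}.gguf ${q}
done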

Deployment test

Running ./build/bin/main -m ./zh-models/chinese-alpaca-2-7b/ggml-model-q8_0.gguf directly does not give a conversation; add the -i flag for interactive mode, or use the script form from the tutorial.
Following the tutorial, create a chat.sh file with the following content:

#!/bin/bash
# temporary script to chat with Chinese Alpaca-2 model
# usage: ./chat.sh alpaca2-ggml-model-path your-first-instruction

SYSTEM='You are a helpful assistant. 你是一个乐于助人的助手。'
FIRST_INSTRUCTION=$2

./build/bin/main -m $1 \
--color -i -c 4096 -t 8 --temp 0.5 --top_k 40 --top_p 0.9 --repeat_penalty 1.1 \
--in-prefix-bos --in-prefix ' [INST] ' --in-suffix ' [/INST]' -p \
"[INST] <<SYS>>
$SYSTEM
<</SYS>>

$FIRST_INSTRUCTION [/INST]"

Run it:

bash chat.sh ./zh-models/chinese-alpaca-2-7b/ggml-model-q8_0.gguf '请列举5条文明乘车的建议'

The model chats as expected, so the deployment test passes.
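Beyond the interactive CLI, the llama.cpp build of this era also produces an HTTP server binary under build/bin, which is a more deployment-friendly way to serve the quantized model; a minimal sketch (binary name, flags, and endpoint follow the bundled server example and should be treated as assumptions for your particular build):

# serve the quantized model over HTTP (sketch)
./build/bin/server -m ./zh-models/chinese-alpaca-2-7b/ggml-model-q8_0.gguf \
  -c 4096 -ngl 99 --host 0.0.0.0 --port 8080
# query it from another shell
curl http://localhost:8080/completion -d '{"prompt": "请列举5条文明乘车的建议", "n_predict": 128}'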

Perplexity test

Download and extract the test data (wikitext-2-raw).
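A minimal sketch of fetching the data; the URL below is the one documented in llama.cpp's perplexity instructions at the time and is an assumption here, so adjust it if the dataset has moved:

# fetch and unpack wikitext-2-raw (URL is an assumption; adjust if it has moved)
wget https://s3.amazonaws.com/research.metamind.io/wikitext/wikitext-2-raw-v1.zip
unzip wikitext-2-raw-v1.zip   # yields ./wikitext-2-raw/wiki.test.raw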

chinese-alpaca-2-1.3b

Test command:

./build/bin/perplexity -m ./zh-models/chinese-alpaca-2-1.3b/ggml-model-q8_0.gguf -f ./wikitext-2-raw/wiki.test.raw -ngl 20

Since the project was built with cmake, the executables live under build/bin; just substitute your own paths for the executable, the model, and the data.
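The -ngl flag sets how many layers are offloaded to the GPU; as a sketch, running the same test fully offloaded and CPU-only makes the effect on speed easy to compare (flag semantics per llama.cpp of this era):

# fully offload all layers to the GPU
./build/bin/perplexity -m ./zh-models/chinese-alpaca-2-1.3b/ggml-model-q8_0.gguf -f ./wikitext-2-raw/wiki.test.raw -ngl 99
# CPU-only run for comparison
./build/bin/perplexity -m ./zh-models/chinese-alpaca-2-1.3b/ggml-model-q8_0.gguf -f ./wikitext-2-raw/wiki.test.raw -ngl 0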
The test output is as follows:

main: build = 2509 (50ccaf5e)
main: built with cc (Ubuntu 9.4.0-1ubuntu1~20.04.2) 9.4.0 for x86_64-linux-gnu
main: seed  = 1711210157
llama_model_loader: loaded meta data with 23 key-value pairs and 39 tensors from ./zh-models/chinese-alpaca-2-1.3b/ggml-model-q8_0.gguf (version GGUF V3 (latest))
llama_model_loader: Dumping metadata keys/values. Note: KV overrides do not apply in this output.
llama_model_loader: - kv   0:                       general.architecture str              = llama
llama_model_loader: - kv   1:                               general.name str              = LLaMA v2
llama_model_loader: - kv   2:                           llama.vocab_size u32              = 55296
llama_model_loader: - kv   3:                       llama.context_length u32              = 4096
llama_model_loader: - kv   4:                     llama.embedding_length u32              = 4096
llama_model_loader: - kv   5:                          llama.block_count u32              = 4
llama_model_loader: - kv   6:                  llama.feed_forward_length u32              = 11008
llama_model_loader: - kv   7:                 llama.rope.dimension_count u32              = 128
llama_model_loader: - kv   8:                 llama.attention.head_count u32              = 32
llama_model_loader: - kv   9:              llama.attention.head_count_kv u32              = 32
llama_model_loader: - kv  10:     llama.attention.layer_norm_rms_epsilon f32              = 0.000010
llama_model_loader: - kv  11:                       llama.rope.freq_base f32              = 10000.000000
llama_model_loader: - kv  12:                          general.file_type u32              = 7
llama_model_loader: - kv  13:                       tokenizer.ggml.model str              = llama
llama_model_loader: - kv  14:                      tokenizer.ggml.tokens arr[str,55296]   = ["<unk>", "<s>", "</s>", "<0x00>", "<...
llama_model_loader: - kv  15:                      tokenizer.ggml.scores arr[f32,55296]   = [0.000000, 0.000000, 0.000000, 0.0000...
llama_model_loader: - kv  16:                  tokenizer.ggml.token_type arr[i32,55296]   = [2, 3, 3, 6, 6, 6, 6, 6, 6, 6, 6, 6, ...
llama_model_loader: - kv  17:                tokenizer.ggml.bos_token_id u32              = 1
llama_model_loader: - kv  18:                tokenizer.ggml.eos_token_id u32              = 2
llama_model_loader: - kv  19:            tokenizer.ggml.padding_token_id u32              = 0
llama_model_loader: - kv  20:               tokenizer.ggml.add_bos_token bool             = true
llama_model_loader: - kv  21:               tokenizer.ggml.add_eos_token bool             = false
llama_model_loader: - kv  22:               general.quantization_version u32              = 2
llama_model_loader: - type  f32:    9 tensors
llama_model_loader: - type q8_0:   30 tensors
llm_load_vocab: mismatch in special tokens definition ( 889/55296 vs 259/55296 ).
llm_load_print_meta: format           = GGUF V3 (latest)
llm_load_print_meta: arch             = llama
llm_load_print_meta: vocab type       = SPM
llm_load_print_meta: n_vocab          = 55296
llm_load_print_meta: n_merges         = 0
llm_load_print_meta: n_ctx_train      = 4096
llm_load_print_meta: n_embd           = 4096
llm_load_print_meta: n_head           = 32
llm_load_print_meta: n_head_kv        = 32
llm_load_print_meta: n_layer          = 4
llm_load_print_meta: n_rot            = 128
llm_load_print_meta: n_embd_head_k    = 128
llm_load_print_meta: n_embd_head_v    = 128
llm_load_print_meta: n_gqa            = 1
llm_load_print_meta: n_embd_k_gqa     = 4096
llm_load_print_meta: n_embd_v_gqa     = 4096
llm_load_print_meta: f_norm_eps       = 0.0e+00
llm_load_print_meta: f_norm_rms_eps   = 1.0e-05
llm_load_print_meta: f_clamp_kqv      = 0.0e+00
llm_load_print_meta: f_max_alibi_bias = 0.0e+00
llm_load_print_meta: f_logit_scale    = 0.0e+00
llm_load_print_meta: n_ff             = 11008
llm_load_print_meta: n_expert         = 0
llm_load_print_meta: n_expert_used    = 0
llm_load_print_meta: causal attn      = 1
llm_load_print_meta: pooling type     = 0
llm_load_print_meta: rope type        = 0
llm_load_print_meta: rope scaling     = linear
llm_load_print_meta: freq_base_train  = 10000.0
llm_load_print_meta: freq_scale_train = 1
llm_load_print_meta: n_yarn_orig_ctx  = 4096
llm_load_print_meta: rope_finetuned   = unknown
llm_load_print_meta: ssm_d_conv       = 0
llm_load_print_meta: ssm_d_inner      = 0
llm_load_print_meta: ssm_d_state      = 0
llm_load_print_meta: ssm_dt_rank      = 0
llm_load_print_meta: model type       = ?B
llm_load_print_meta: model ftype      = Q8_0
llm_load_print_meta: model params     = 1.26 B
llm_load_print_meta: model size       = 1.25 GiB (8.50 BPW)
llm_load_print_meta: general.name     = LLaMA v2
llm_load_print_meta: BOS token        = 1 '<s>'
llm_load_print_meta: EOS token        = 2 '</s>'
llm_load_print_meta: UNK token        = 0 '<unk>'
llm_load_print_meta: PAD token        = 0 '<unk>'
llm_load_print_meta: LF token         = 13 '<0x0A>'
ggml_cuda_init: GGML_CUDA_FORCE_MMQ:   no
ggml_cuda_init: CUDA_USE_TENSOR_CORES: yes
ggml_cuda_init: found 2 CUDA devices:
  Device 0: NVIDIA A100-PCIE-40GB, compute capability 8.0, VMM: yes
  Device 1: NVIDIA A100-PCIE-40GB, compute capability 8.0, VMM: yes
llm_load_tensors: ggml ctx size =    0.05 MiB
llm_load_tensors: offloading 4 repeating layers to GPU
llm_load_tensors: offloading non-repeating layers to GPU
llm_load_tensors: offloaded 5/5 layers to GPU
llm_load_tensors:        CPU buffer size =   229.50 MiB
llm_load_tensors:      CUDA0 buffer size =   615.28 MiB
llm_load_tensors:      CUDA1 buffer size =   434.61 MiB
...............................
llama_new_context_with_model: n_ctx      = 2048
llama_new_context_with_model: n_batch    = 2048
llama_new_context_with_model: n_ubatch   = 512
llama_new_context_with_model: freq_base  = 10000.0
llama_new_context_with_model: freq_scale = 1
llama_kv_cache_init:      CUDA0 KV buffer size =    96.00 MiB
llama_kv_cache_init:      CUDA1 KV buffer size =    32.00 MiB
llama_new_context_with_model: KV self size  =  128.00 MiB, K (f16):   64.00 MiB, V (f16):   64.00 MiB
llama_new_context_with_model:  CUDA_Host  output buffer size =   432.00 MiB
llama_new_context_with_model: pipeline parallelism enabled (n_copies=4)
llama_new_context_with_model:      CUDA0 compute buffer size =   208.01 MiB
llama_new_context_with_model:      CUDA1 compute buffer size =   200.01 MiB
llama_new_context_with_model:  CUDA_Host compute buffer size =    24.02 MiB
llama_new_context_with_model: graph nodes  = 136
llama_new_context_with_model: graph splits = 3

system_info: n_threads = 76 / 152 | AVX = 1 | AVX_VNNI = 0 | AVX2 = 1 | AVX512 = 1 | AVX512_VBMI = 1 | AVX512_VNNI = 1 | FMA = 1 | NEON = 0 | ARM_FMA = 0 | F16C = 1 | FP16_VA = 0 | WASM_SIMD = 0 | BLAS = 1 | SSE3 = 1 | SSSE3 = 1 | VSX = 0 | MATMUL_INT8 = 0 |
perplexity: tokenizing the input ..
perplexity: tokenization took 1187.98 ms
perplexity: calculating perplexity over 655 chunks, n_ctx=512, batch_size=2048, n_seq=4
perplexity: 0.06 seconds per pass - ETA 0.17 minutes
[1]35.2055,[2]3151.7331,[3]9745.8526,[4]3056.9236
......
[653]1226.9638,[654]1219.7704,[655]1213.9217,
Final estimate: PPL = 1213.9217 +/- 16.09822

llama_print_timings:        load time =    2998.83 ms
llama_print_timings:      sample time =       0.00 ms /     1 runs   (    0.00 ms per token,      inf tokens per second)
llama_print_timings: prompt eval time =    8371.13 ms / 335360 tokens (    0.02 ms per token, 40061.49 tokens per second)
llama_print_timings:        eval time =       0.00 ms /     1 runs   (    0.00 ms per token,      inf tokens per second)
llama_print_timings:       total time =   17937.28 ms / 335361 tokens

chinese-alpaca-2-13b

Test command:

./build/bin/perplexity -m ./zh-models/chinese-alpaca-2-13b/ggml-model-q8_0.gguf -f ./wikitext-2-raw/wiki.test.raw -ngl 10

The test output is as follows:

main: build = 2509 (50ccaf5e)
main: built with cc (Ubuntu 9.4.0-1ubuntu1~20.04.2) 9.4.0 for x86_64-linux-gnu
main: seed  = 1711210012
llama_model_loader: loaded meta data with 21 key-value pairs and 363 tensors from ./zh-models/chinese-alpaca-2-13b/ggml-model-q8_0.gguf (version GGUF V3 (latest))
llama_model_loader: Dumping metadata keys/values. Note: KV overrides do not apply in this output.
llama_model_loader: - kv   0:                       general.architecture str              = llama
llama_model_loader: - kv   1:                               general.name str              = LLaMA v2
llama_model_loader: - kv   2:                           llama.vocab_size u32              = 55296
llama_model_loader: - kv   3:                       llama.context_length u32              = 4096
llama_model_loader: - kv   4:                     llama.embedding_length u32              = 5120
llama_model_loader: - kv   5:                          llama.block_count u32              = 40
llama_model_loader: - kv   6:                  llama.feed_forward_length u32              = 13824
llama_model_loader: - kv   7:                 llama.rope.dimension_count u32              = 128
llama_model_loader: - kv   8:                 llama.attention.head_count u32              = 40
llama_model_loader: - kv   9:              llama.attention.head_count_kv u32              = 40
llama_model_loader: - kv  10:     llama.attention.layer_norm_rms_epsilon f32              = 0.000010
llama_model_loader: - kv  11:                          general.file_type u32              = 7
llama_model_loader: - kv  12:                       tokenizer.ggml.model str              = llama
llama_model_loader: - kv  13:                      tokenizer.ggml.tokens arr[str,55296]   = ["<unk>", "<s>", "</s>", "<0x00>", "<...
llama_model_loader: - kv  14:                      tokenizer.ggml.scores arr[f32,55296]   = [0.000000, 0.000000, 0.000000, 0.0000...
llama_model_loader: - kv  15:                  tokenizer.ggml.token_type arr[i32,55296]   = [2, 3, 3, 6, 6, 6, 6, 6, 6, 6, 6, 6, ...
llama_model_loader: - kv  16:                tokenizer.ggml.bos_token_id u32              = 1
llama_model_loader: - kv  17:                tokenizer.ggml.eos_token_id u32              = 2
llama_model_loader: - kv  18:               tokenizer.ggml.add_bos_token bool             = true
llama_model_loader: - kv  19:               tokenizer.ggml.add_eos_token bool             = false
llama_model_loader: - kv  20:               general.quantization_version u32              = 2
llama_model_loader: - type  f32:   81 tensors
llama_model_loader: - type q8_0:  282 tensors
llm_load_vocab: mismatch in special tokens definition ( 889/55296 vs 259/55296 ).
llm_load_print_meta: format           = GGUF V3 (latest)
llm_load_print_meta: arch             = llama
llm_load_print_meta: vocab type       = SPM
llm_load_print_meta: n_vocab          = 55296
llm_load_print_meta: n_merges         = 0
llm_load_print_meta: n_ctx_train      = 4096
llm_load_print_meta: n_embd           = 5120
llm_load_print_meta: n_head           = 40
llm_load_print_meta: n_head_kv        = 40
llm_load_print_meta: n_layer          = 40
llm_load_print_meta: n_rot            = 128
llm_load_print_meta: n_embd_head_k    = 128
llm_load_print_meta: n_embd_head_v    = 128
llm_load_print_meta: n_gqa            = 1
llm_load_print_meta: n_embd_k_gqa     = 5120
llm_load_print_meta: n_embd_v_gqa     = 5120
llm_load_print_meta: f_norm_eps       = 0.0e+00
llm_load_print_meta: f_norm_rms_eps   = 1.0e-05
llm_load_print_meta: f_clamp_kqv      = 0.0e+00
llm_load_print_meta: f_max_alibi_bias = 0.0e+00
llm_load_print_meta: f_logit_scale    = 0.0e+00
llm_load_print_meta: n_ff             = 13824
llm_load_print_meta: n_expert         = 0
llm_load_print_meta: n_expert_used    = 0
llm_load_print_meta: causal attn      = 1
llm_load_print_meta: pooling type     = 0
llm_load_print_meta: rope type        = 0
llm_load_print_meta: rope scaling     = linear
llm_load_print_meta: freq_base_train  = 10000.0
llm_load_print_meta: freq_scale_train = 1
llm_load_print_meta: n_yarn_orig_ctx  = 4096
llm_load_print_meta: rope_finetuned   = unknown
llm_load_print_meta: ssm_d_conv       = 0
llm_load_print_meta: ssm_d_inner      = 0
llm_load_print_meta: ssm_d_state      = 0
llm_load_print_meta: ssm_dt_rank      = 0
llm_load_print_meta: model type       = 13B
llm_load_print_meta: model ftype      = Q8_0
llm_load_print_meta: model params     = 13.25 B
llm_load_print_meta: model size       = 13.12 GiB (8.50 BPW)
llm_load_print_meta: general.name     = LLaMA v2
llm_load_print_meta: BOS token        = 1 '<s>'
llm_load_print_meta: EOS token        = 2 '</s>'
llm_load_print_meta: UNK token        = 0 '<unk>'
llm_load_print_meta: LF token         = 13 '<0x0A>'
ggml_cuda_init: GGML_CUDA_FORCE_MMQ:   no
ggml_cuda_init: CUDA_USE_TENSOR_CORES: yes
ggml_cuda_init: found 2 CUDA devices:
  Device 0: NVIDIA A100-PCIE-40GB, compute capability 8.0, VMM: yes
  Device 1: NVIDIA A100-PCIE-40GB, compute capability 8.0, VMM: yes
llm_load_tensors: ggml ctx size =    0.42 MiB
llm_load_tensors: offloading 10 repeating layers to GPU
llm_load_tensors: offloaded 10/41 layers to GPU
llm_load_tensors:        CPU buffer size = 13431.58 MiB
llm_load_tensors:      CUDA0 buffer size =  1607.23 MiB
llm_load_tensors:      CUDA1 buffer size =  1607.23 MiB
..................................................................................................
llama_new_context_with_model: n_ctx      = 2048
llama_new_context_with_model: n_batch    = 2048
llama_new_context_with_model: n_ubatch   = 512
llama_new_context_with_model: freq_base  = 10000.0
llama_new_context_with_model: freq_scale = 1
llama_kv_cache_init:  CUDA_Host KV buffer size =  1200.00 MiB
llama_kv_cache_init:      CUDA0 KV buffer size =   200.00 MiB
llama_kv_cache_init:      CUDA1 KV buffer size =   200.00 MiB
llama_new_context_with_model: KV self size  = 1600.00 MiB, K (f16):  800.00 MiB, V (f16):  800.00 MiB
llama_new_context_with_model:  CUDA_Host  output buffer size =   432.00 MiB
llama_new_context_with_model:      CUDA0 compute buffer size =   404.88 MiB
llama_new_context_with_model:      CUDA1 compute buffer size =   204.00 MiB
llama_new_context_with_model:  CUDA_Host compute buffer size =    24.00 MiB
llama_new_context_with_model: graph nodes  = 1324
llama_new_context_with_model: graph splits = 335

system_info: n_threads = 76 / 152 | AVX = 1 | AVX_VNNI = 0 | AVX2 = 1 | AVX512 = 1 | AVX512_VBMI = 1 | AVX512_VNNI = 1 | FMA = 1 | NEON = 0 | ARM_FMA = 0 | F16C = 1 | FP16_VA = 0 | WASM_SIMD = 0 | BLAS = 1 | SSE3 = 1 | SSSE3 = 1 | VSX = 0 | MATMUL_INT8 = 0 |
perplexity: tokenizing the input ..
perplexity: tokenization took 728.604 ms
perplexity: calculating perplexity over 655 chunks, n_ctx=512, batch_size=2048, n_seq=4
perplexity: 7.36 seconds per pass - ETA 20.08 minutes
[1]4.8998,[2]5.3381,[3]6.0623,
......
[654]6.3736,[655]6.3713,
Final estimate: PPL = 6.3713 +/- 0.03705

llama_print_timings:        load time =   17705.89 ms
llama_print_timings:      sample time =       0.00 ms /     1 runs   (    0.00 ms per token,      inf tokens per second)
llama_print_timings: prompt eval time = 1130068.93 ms / 335360 tokens (    3.37 ms per token,   296.76 tokens per second)
llama_print_timings:        eval time =       0.00 ms /     1 runs   (    0.00 ms per token,      inf tokens per second)
llama_print_timings:       total time = 1137969.38 ms / 335361 tokens
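
Summarizing the two runs on wiki.test.raw: chinese-alpaca-2-1.3b comes out at PPL = 1213.92 with all 5/5 layers on the GPU, while chinese-alpaca-2-13b reaches PPL = 6.37 with 10/41 layers offloaded.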
