Installing omlutils on wsl-oracle
- 1. Install cmake and gcc-c++
- 2. Install omlutils
- 3. Create an ONNX model with omlutils
1. Install cmake and gcc-c++
sudo dnf install -y cmake gcc-c++
2. Install omlutils
pip install omlutils-0.10.0-cp312-cp312-linux_x86_64.whl
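The wheel's filename tags (cp312, linux_x86_64) must match the running interpreter, or pip will refuse to install it. A minimal sketch of that check (illustrative string matching only, not pip's real tag resolver):

```python
def wheel_compatible(wheel_name, py_tag, machine):
    """Rough check that a wheel's CPython/platform tags match an interpreter.
    Illustrative only -- pip's actual compatibility-tag matching is richer."""
    platform_ok = "x86_64" not in wheel_name or machine == "x86_64"
    return py_tag in wheel_name and platform_ok

wheel = "omlutils-0.10.0-cp312-cp312-linux_x86_64.whl"
print(wheel_compatible(wheel, "cp312", "x86_64"))  # True: CPython 3.12 on x86_64
print(wheel_compatible(wheel, "cp311", "x86_64"))  # False: wrong interpreter version
```

If the check fails, install the wheel from a conda environment created with Python 3.12.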
Do not install requirements.txt: among other things it pins torch==2.2.0+cpu, which would uninstall your GPU-enabled torch build. The offending lines are:
--extra-index-url https://download.pytorch.org/whl/cpu
torch==2.2.0+cpu
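If you still want the other pins from requirements.txt, one option is to strip the CPU-only torch lines out before installing (a sketch; the transformers pin below is a hypothetical stand-in for whatever else the file contains):

```python
# Drop the CPU-only torch pin and its extra index so that installing the
# filtered requirements file leaves the GPU-enabled torch untouched.
lines = [
    "--extra-index-url https://download.pytorch.org/whl/cpu",
    "torch==2.2.0+cpu",
    "transformers==4.38.0",  # hypothetical: stands in for the file's other pins
]
kept = [ln for ln in lines
        if not ln.startswith("torch==") and "pytorch.org/whl/cpu" not in ln]
print(kept)  # ['transformers==4.38.0']
```

Write `kept` back out to a new file and pass that to `pip install -r`.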
3. Create an ONNX model with omlutils
Install sentencepiece:
pip install sentencepiece
Patch omlutils so that it can export models larger than 1 GB and models whose tokenizer is XLMRobertaTokenizer.
vi /home/oracle/miniconda/envs/learn-oracle23c/lib/python3.12/site-packages/omlutils/_pipeline/steps.py
--- before
size_threshold = quant_limit if is_quantized else 0.99e9
--- after
size_threshold = quant_limit if is_quantized else 0.99e9 * 5
---
vi /home/oracle/miniconda/envs/learn-oracle23c/lib/python3.12/site-packages/omlutils/_pipeline/steps.py
--- before
def validateBertTokenizer(self, tokenizer):
    supportedTokenizer = [
        transformers.models.bert.BertTokenizer,
        transformers.models.distilbert.DistilBertTokenizer,
        transformers.models.mpnet.MPNetTokenizer,
    ]
    cls = tokenizer.__class__
    if cls not in supportedTokenizer:
        raise ValueError(f"Unsupported tokenizer {cls}")
--- after
def validateBertTokenizer(self, tokenizer):
    supportedTokenizer = [
        transformers.models.bert.BertTokenizer,
        transformers.models.distilbert.DistilBertTokenizer,
        transformers.models.mpnet.MPNetTokenizer,
        transformers.models.xlm_roberta.tokenization_xlm_roberta.XLMRobertaTokenizer,
    ]
    cls = tokenizer.__class__
    if cls not in supportedTokenizer:
        raise ValueError(f"Unsupported tokenizer {cls}")
---
vi /home/oracle/miniconda/envs/learn-oracle23c/lib/python3.12/site-packages/omlutils/_onnx_export/tokenizer_export.py
--- before
TOKENIZER_MAPPING = {
    transformers.models.bert.BertTokenizer: SupportedTokenizers.BERT,
    transformers.models.clip.CLIPTokenizer: SupportedTokenizers.CLIP,
    transformers.models.distilbert.DistilBertTokenizer: SupportedTokenizers.BERT,
    transformers.models.gpt2.GPT2Tokenizer: SupportedTokenizers.GPT2,
    # transformers.models.llama.LlamaTokenizer: SupportedTokenizers.SENTENCEPIECE,
    # transformers.models.mluke.MLukeTokenizer: SupportedTokenizers.SENTENCEPIECE,
    transformers.models.mpnet.MPNetTokenizer: SupportedTokenizers.BERT,
    # transformers.models.roberta.tokenization_roberta.RobertaTokenizer: SupportedTokenizers.ROBERTA,
    # transformers.models.xlm_roberta.XLMRobertaTokenizer: SupportedTokenizers.SENTENCEPIECE,
}
--- after
TOKENIZER_MAPPING = {
    transformers.models.bert.BertTokenizer: SupportedTokenizers.BERT,
    transformers.models.clip.CLIPTokenizer: SupportedTokenizers.CLIP,
    transformers.models.distilbert.DistilBertTokenizer: SupportedTokenizers.BERT,
    transformers.models.gpt2.GPT2Tokenizer: SupportedTokenizers.GPT2,
    # transformers.models.llama.LlamaTokenizer: SupportedTokenizers.SENTENCEPIECE,
    # transformers.models.mluke.MLukeTokenizer: SupportedTokenizers.SENTENCEPIECE,
    transformers.models.mpnet.MPNetTokenizer: SupportedTokenizers.BERT,
    # transformers.models.roberta.tokenization_roberta.RobertaTokenizer: SupportedTokenizers.ROBERTA,
    transformers.models.xlm_roberta.XLMRobertaTokenizer: SupportedTokenizers.SENTENCEPIECE,
}
---
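Why the tokenizer patches are needed: omlutils resolves the export format by looking the tokenizer's class up in TOKENIZER_MAPPING (and separately whitelists classes in validateBertTokenizer), so an unmapped class is rejected even when the export itself would work. A toy sketch of that lookup, using stand-in classes rather than the real transformers classes:

```python
# Stand-in classes; the real dictionary is keyed by transformers tokenizer classes.
class BertTokenizer: ...
class XLMRobertaTokenizer: ...

TOKENIZER_MAPPING = {BertTokenizer: "BERT"}  # state before the patch

tok = XLMRobertaTokenizer()
print(tok.__class__ in TOKENIZER_MAPPING)    # False: export would be rejected

# Effect of uncommenting the xlm_roberta entry:
TOKENIZER_MAPPING[XLMRobertaTokenizer] = "SENTENCEPIECE"
print(TOKENIZER_MAPPING[tok.__class__])      # SENTENCEPIECE
```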
Create multilingual_e5_small.py with the following content:
from omlutils import EmbeddingModel, EmbeddingModelConfig
print(f"start...")
config = EmbeddingModelConfig.from_template("text", max_seq_length=512)
em = EmbeddingModel(model_name="intfloat/multilingual-e5-small", config=config)
em.export2file("multilingual_e5_small", output_dir=".")
print(f"complete...")
Create the ONNX model:
python multilingual_e5_small.py
When the script finishes, it produces a multilingual_e5_small.onnx file.
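To use the exported model for similarity search, you would run inputs through it with onnxruntime and then mean-pool and L2-normalize the token embeddings, the usual post-processing for e5-style embedding models. The sketch below shows only that post-processing on dummy tensors (the shapes and the pooling choice are assumptions, not taken from the omlutils export):

```python
import numpy as np

def mean_pool(last_hidden_state, attention_mask):
    """Average the token embeddings, ignoring padding positions."""
    mask = attention_mask[..., None].astype(last_hidden_state.dtype)
    summed = (last_hidden_state * mask).sum(axis=1)
    counts = np.clip(mask.sum(axis=1), 1e-9, None)
    return summed / counts

def l2_normalize(x):
    """Scale each row to unit length so a dot product is a cosine similarity."""
    return x / np.linalg.norm(x, axis=-1, keepdims=True)

# Dummy tensors standing in for the ONNX model's output:
rng = np.random.default_rng(0)
hidden = rng.normal(size=(2, 4, 8))  # (batch, seq_len, hidden)
mask = np.array([[1, 1, 1, 0],       # (batch, seq_len); 0 marks padding
                 [1, 1, 0, 0]])

emb = l2_normalize(mean_pool(hidden, mask))
print(emb.shape)               # (2, 8)
print(float(emb[0] @ emb[1]))  # cosine similarity of the two sentences
```

Note also that the e5 models expect "query: " and "passage: " prefixes on their input texts.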
(Optional) Upgrade transformers:
pip install -U transformers
Done!