Basic task: use XTuner to fine-tune InternLM2-Chat-7B so that it adopts a custom "personal assistant" identity, as shown in the figure below (replace 尖米 in the figure with your own nickname). Record the reproduction process and take screenshots.
1. Environment setup
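A minimal sketch of the environment setup, assuming a fresh conda environment named xtuner-env (the exact package versions used in this run are not recorded here):

```bash
# Sketch of the environment setup; the Python version and install options are assumptions.
conda create -n xtuner-env python=3.10 -y
conda activate xtuner-env

# Install XTuner together with its DeepSpeed extra
pip install -U 'xtuner[deepspeed]'

# Quick sanity check: list the built-in configs for InternLM2
xtuner list-cfg -p internlm2
```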
2. Modify the data: replace 尖米 with 人工智能小助手 (AI assistant)
Before the change:
After the change:
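The replacement itself is a plain string substitution over the self-cognition dataset. A hedged example of that step (the dataset path datas/assistant.json is an assumption based on the usual XTuner demo layout; adjust it to wherever the data actually lives):

```bash
# Bulk-replace the placeholder nickname in the self-cognition dataset.
# The file path is an assumption, not taken from the original write-up.
cd /root/finetune
sed -i 's/尖米/人工智能小助手/g' datas/assistant.json

# Sanity check: show a few lines that now contain the new name
grep '人工智能小助手' datas/assistant.json | head -n 3
```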
3. Training
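Before launching training, a built-in QLoRA config is copied and pointed at the local model and the modified dataset. A sketch of that preparation (the field names are the usual ones in XTuner's Alpaca-style configs; the exact paths are assumptions). The actual `xtuner train` command appears in the log below.

```bash
# Copy a built-in config into ./config (the name matches the copied config used below)
cd /root/finetune
mkdir -p config && cd config
xtuner copy-cfg internlm2_5_chat_7b_qlora_alpaca_e3 .

# Then edit internlm2_5_chat_7b_qlora_alpaca_e3_copy.py, typically:
#   pretrained_model_name_or_path = '<local path of the InternLM2.5-7B-Chat weights>'
#   alpaca_en_path                = '<path of the modified assistant dataset>'
#   evaluation_inputs             = ['请介绍一下你自己', ...]
```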
When I first launched training, a problem with the environment setup caused an error:
```
(xtuner-env) root@intern-studio-17003771:~/finetune# xtuner train ./config/internlm2_5_chat_7b_qlora_alpaca_e3_copy.py --deepspeed deepspeed_zero2 --work-dir ./work_dirs/assistTuner
/root/.conda/envs/xtuner-env/lib/python3.10/site-packages/mmengine/optim/optimizer/zero_optimizer.py:11: DeprecationWarning: `TorchScript` support for functional optimizers is deprecated and will be removed in a future PyTorch release. Consider using the `torch.compile` optimizer instead.
  from torch.distributed.optim import \
[2025-01-16 12:36:56,087] [INFO] [real_accelerator.py:191:get_accelerator] Setting ds_accelerator to cuda (auto detect)
/root/.conda/envs/xtuner-env/lib/python3.10/site-packages/deepspeed/runtime/zero/linear.py:49: FutureWarning: `torch.cuda.amp.custom_fwd(args...)` is deprecated. Please use `torch.amp.custom_fwd(args..., device_type='cuda')` instead.
  def forward(ctx, input, weight, bias=None):
/root/.conda/envs/xtuner-env/lib/python3.10/site-packages/deepspeed/runtime/zero/linear.py:67: FutureWarning: `torch.cuda.amp.custom_bwd(args...)` is deprecated. Please use `torch.amp.custom_bwd(args..., device_type='cuda')` instead.
  def backward(ctx, grad_output):
01/16 12:37:10 - mmengine - WARNING - WARNING: command error: 'cannot import name 'log' from 'torch.distributed.elastic.agent.server.api' (/root/.conda/envs/xtuner-env/lib/python3.10/site-packages/torch/distributed/elastic/agent/server/api.py)'!
```
Reinstalling the environment once fixed this problem (the `cannot import name 'log'` error usually points to a PyTorch version that does not match what the distributed launcher expects).
Training process:
Model response after 500 training iterations:
Model response after 864 training iterations:
4. Weight conversion and merging
The files are converted and merged with the following commands:
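A sketch of the two commands, assuming the final checkpoint is iter_864.pth and the base weights live under /root/finetune/models/internlm2_5-7b-chat (both paths are assumptions; substitute the real ones from your run):

```bash
cd /root/finetune

# 1. Convert the training checkpoint (.pth) into a HuggingFace-format LoRA adapter
xtuner convert pth_to_hf ./config/internlm2_5_chat_7b_qlora_alpaca_e3_copy.py \
    ./work_dirs/assistTuner/iter_864.pth \
    ./work_dirs/assistTuner/hf

# 2. Merge the adapter into the base model weights
xtuner convert merge \
    /root/finetune/models/internlm2_5-7b-chat \
    ./work_dirs/assistTuner/hf \
    ./work_dirs/assistTuner/merged \
    --max-shard-size 2GB
```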
Model merging:
5. Chatting with the model through the WebUI
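The merged model is served through a Streamlit demo. A hedged sketch of the launch step (the script name xtuner_streamlit_demo.py is an assumption based on the course tutorial; inside the script, point the model path at the merged weights first):

```bash
# Launch the Streamlit WebUI on the development machine.
# The demo script name and its location are assumptions; edit the model path
# inside the script to ./work_dirs/assistTuner/merged before running.
pip install streamlit
streamlit run xtuner_streamlit_demo.py
# Streamlit listens on port 8501 by default.
```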
An error I ran into:
The page kept failing to open. I assumed the environment was broken, but it was not: the real cause was the port connection on my own computer, so make sure the local SSH port forwarding is set up.
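For reference, a sketch of the local port forwarding that makes the page reachable (run it on your own computer; the host and `<ssh_port>` are assumptions and should be copied from the InternStudio console):

```bash
# Forward Streamlit's default port 8501 from the development machine to localhost.
# Host and port are placeholders; use the exact values shown for your machine.
ssh -CNg -L 8501:127.0.0.1:8501 root@ssh.intern-ai.org.cn -p <ssh_port>
# Then open http://127.0.0.1:8501 in a local browser.
```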