Following the example in the official Ollama docs, I failed to communicate with the local deepseek-r1:1.5b model, yet I noticed that VS Code could call the deepseek model running on the same local Ollama instance.
To work out how VS Code talks to the model, I used Wireshark to capture traffic on the local loopback interface, which revealed the exact method.
I. Research process
Asking a question in VS Code produced the answer shown below.
As shown in the following figure, the capture reveals the uploaded content and parameters, and where the key fields sit in the request.
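From the capture, VS Code talks to Ollama through an ordinary HTTP POST to the /api/chat endpoint. A minimal sketch of building that JSON body (the prompt "hello" is a placeholder; the model name and option values mirror the capture):

```python
import json

# Minimal request body for Ollama's /api/chat endpoint, mirroring the
# captured parameters; the prompt "hello" is just a placeholder.
payload = {
    "model": "deepseek-r1:1.5b",
    "messages": [{"role": "user", "content": "hello"}],
    "options": {"num_predict": 4096, "num_ctx": 8096},
    "keep_alive": 1800,
}
body = json.dumps(payload)
print(body)
```

The full script below sends exactly this kind of body, with the user's question substituted in.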
II. Chat script
Rewritten as a Python script, it looks like this:
import socket
import json


def get_content(str_data):
    # Concatenate the "content" fields from Ollama's line-delimited JSON response
    content_str = ''
    for i in str_data.splitlines():
        if 'think' in i:
            continue
        elif '"content"' in i:
            # Parse this JSON line
            data = json.loads(i)
            # Extract the content value
            content = data['message']['content']
            content_str = content_str + content.replace('\n', '')
    stripped_string = content_str.lstrip('\n')
    return stripped_string


def send_post_request(host, port, path, headers, body):
    # Create a socket object
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        # Connect to the server
        s.connect((host, port))
        # Build the HTTP request
        request = f"POST {path} HTTP/1.1\r\n"
        for header, value in headers.items():
            request += f"{header}: {value}\r\n"
        request += "\r\n"  # blank line ends the headers
        request += body
        # Send the HTTP request
        s.sendall(request.encode('utf-8'))
        # Receive the full response
        response = b''
        while True:
            part = s.recv(1024)
            if not part:
                break
            response += part
        return response


def main():
    # Server address and port
    host = '127.0.0.1'
    port = 11434
    path = "/api/chat"
    while True:
        question = input('>>>')
        # Build the request payload
        data = {
            "model": "deepseek-r1:1.5b",
            "messages": [{"role": "user", "content": question}],
            "options": {
                "num_predict": 4096,
                "stop": ["<｜begin▁of▁sentence｜>", "<｜end▁of▁sentence｜>",
                         "<｜User｜>", "<｜Assistant｜>"],
                "num_ctx": 8096
            },
            "keep_alive": 1800
        }
        body = json.dumps(data)
        # Build the HTTP headers, copied from the captured VS Code request
        headers = {
            "Accept": "*/*",
            "Accept-Encoding": "gzip, deflate, br",
            "Authorization": "Bearer undefined",
            "Content-Length": str(len(body)),
            "Content-Type": "application/json",
            "User-Agent": "node-fetch",
            "Host": "127.0.0.1:11434",
            "Connection": "close"
        }
        # Send the POST request and collect the response
        response = send_post_request(host, port, path, headers, body)
        print("【deepseek】: ", end='')
        mess = get_content(response.decode('utf-8'))
        print(mess)


if __name__ == '__main__':
    main()
The detailed slicing and pretty-printing steps are omitted here.
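For reference, Ollama answers /api/chat as a stream of line-delimited JSON objects, with the model's &lt;think&gt; reasoning delivered as ordinary content chunks. The slicing step can be illustrated offline like this (the helper name extract_content and the sample lines are made up for illustration, not real model output):

```python
import json


def extract_content(ndjson_text):
    """Concatenate message.content from NDJSON lines, dropping think markers."""
    parts = []
    for line in ndjson_text.splitlines():
        line = line.strip()
        if not line.startswith('{'):
            continue  # skip non-JSON lines (e.g. chunked-encoding sizes)
        obj = json.loads(line)
        chunk = obj.get('message', {}).get('content', '')
        if chunk in ('<think>', '</think>'):
            continue  # skip the reasoning markers
        parts.append(chunk)
    # drop leading newlines left over after the think block
    return ''.join(parts).lstrip('\n')


# Fabricated sample of what the streamed lines look like
sample = '\n'.join([
    '{"message": {"content": "<think>"}, "done": false}',
    '{"message": {"content": "\\n"}, "done": false}',
    '{"message": {"content": "</think>"}, "done": false}',
    '{"message": {"content": "Hello"}, "done": false}',
    '{"message": {"content": " world"}, "done": true}',
])
print(extract_content(sample))  # → Hello world
```

The script's get_content takes a cruder approach (skipping any line containing "think"), which works because the reasoning markers arrive as their own chunks.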