Download Ollama from its website.
Then in cmd, run a model from the Ollama library:
English chat:
Gemma is available in both 2b and 7b parameter sizes:
ollama run gemma:2b
ollama run gemma:7b   (default)
Chinese chat:
ollama run qwen:0.5b
ollama run qwen:1.8b
Run the script from VS Code instead of cmd: D:\Research\flower_exif\gemma_local.py
from langchain.callbacks.manager import CallbackManager
from langchain.callbacks.streaming_stdout import StreamingStdOutCallbackHandler
from langchain_community.llms.ollama import Ollama

# Connect to the local Ollama server (default port 11434)
llm = Ollama(
    base_url="http://localhost:11434",
    model="qwen:1.8b",
)

def get_completion_ollama(prompt):
    return llm.invoke(prompt)

prompt = 'could you describe a storm weather'
res = get_completion_ollama(prompt=prompt)
print(res)
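If LangChain is not installed, the same local server can be called directly over Ollama's REST API at http://localhost:11434/api/generate using only the standard library. This is a minimal sketch; `build_generate_request` and `generate` are helper names introduced here, not part of any library.

```python
import json
import urllib.error
import urllib.request

# Ollama's local REST endpoint for single-turn generation
OLLAMA_URL = "http://localhost:11434/api/generate"

def build_generate_request(model: str, prompt: str) -> bytes:
    # stream=False asks the server for one complete JSON response
    # instead of a stream of partial tokens.
    return json.dumps(
        {"model": model, "prompt": prompt, "stream": False}
    ).encode("utf-8")

def generate(model: str, prompt: str) -> str:
    req = urllib.request.Request(
        OLLAMA_URL,
        data=build_generate_request(model, prompt),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        # The generated text is in the "response" field of the reply.
        return json.loads(resp.read())["response"]

if __name__ == "__main__":
    try:
        print(generate("qwen:1.8b", "could you describe a storm weather"))
    except urllib.error.URLError:
        print("Ollama server is not running (start it with `ollama serve`)")
```

This removes the langchain/langchain_community dependency entirely; only a running `ollama serve` is needed.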