This article doesn't produce any technology of its own; it just ports what's already out there!
Preface
The material you can find online for this is limited: it is either paywalled or complicated to configure, and since I refuse to spend a single cent, I decided to build one myself. The feature list is inspired by the nanodb project from the NVIDIA AI Lab. The goal is a demo that supports both text-to-image and image-to-image search. Since I don't have a formal CS background and my coding knowledge is narrow, there is no web UI; only a demo is provided, and you will need to write the actual business features yourself.
Implementation Approach
The overall approach is shown in the figure below: we first run the images through CLIP to generate their feature vectors and store them in the database, and then query with either an image or a piece of text as input. Note that only one of the two inputs (image or text) is needed per query.
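Before diving into the full scripts, here is a minimal, self-contained sketch of that flow using Milvus Lite, with random vectors standing in for the CLIP features. Everything in this snippet (the demo_flow.db path, the demo collection, the fake image paths) is illustrative only; the real code follows below.
import numpy as np
from pymilvus import MilvusClient

client = MilvusClient("demo_flow.db")  # Milvus Lite: the whole database lives in this single local file
if client.has_collection(collection_name="demo"):
    client.drop_collection(collection_name="demo")
client.create_collection(collection_name="demo", dimension=512, metric_type="COSINE")

# Ingestion: in the real demo these vectors come from Chinese-CLIP's image encoder
vectors = np.random.rand(10, 512).astype(np.float32)
vectors /= np.linalg.norm(vectors, axis=1, keepdims=True)  # CLIP features are L2-normalized as well
data = [{"id": i, "image_path": f"img_{i}.jpg", "vector": vectors[i]} for i in range(10)]
client.insert(collection_name="demo", data=data)

# Query: in the real demo this is a CLIP text embedding (text-to-image) or image embedding (image-to-image)
query = vectors[0:1]
hits = client.search(collection_name="demo", data=query, output_fields=["image_path"], limit=3)
print(hits[0])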
Environment Setup
Chinese-CLIP: Chinese version of CLIP which achieves Chinese cross-modal retrieval and representation generation. https://github.com/OFA-Sys/Chinese-CLIP
Milvus:
pip install -U pymilvus
PyTorch:
pip install torch==1.13.0+cu117 torchvision==0.14.0+cu117 torchaudio==0.13.0 --extra-index-url https://download.pytorch.org/whl/cu117
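A quick way to confirm the environment is set up correctly (a minimal sanity check, not part of the original demo):
import torch
from cn_clip.clip import available_models
from pymilvus import MilvusClient  # imported only to confirm pymilvus is installed

print(torch.__version__, torch.cuda.is_available())  # expect a +cu117 build and True on a GPU machine
print(available_models())  # should include 'ViT-B-16', the model used throughout this article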
Source Code
Data Ingestion
import torch
from PIL import Image
import cn_clip.clip as clip
from cn_clip.clip import load_from_name, available_models
import numpy as np
import os
from pymilvus import MilvusClient
client = MilvusClient("/home/project_python/Chinese-CLIP/my_database/coco2017.db")
if client.has_collection(collection_name="text_image"):
    client.drop_collection(collection_name="text_image")
client.create_collection(
    collection_name="text_image",
    dimension=512,  # ViT-B-16 image/text features are 512-dimensional
    metric_type="COSINE"
)
print("Available models:", available_models())

def getFileList(dir, Filelist, ext=None):
    """Collect the files in a directory and all of its subdirectories.
    dir: root directory
    ext: file extension to keep
    Returns: list of file paths
    """
    newDir = dir
    if os.path.isfile(dir):
        if ext is None:
            Filelist.append(dir)
        else:
            if ext in dir:
                Filelist.append(dir)
    elif os.path.isdir(dir):
        for s in os.listdir(dir):
            newDir = os.path.join(dir, s)
            getFileList(newDir, Filelist, ext)
    return Filelist

if __name__ == "__main__":
    device = "cuda" if torch.cuda.is_available() else "cpu"
    model, preprocess = load_from_name("ViT-B-16", device=device, download_root='./')
    model.eval()
    img_dir = r"/home/project_python/Chinese-CLIP/my_dataset/coco"
    image_path_list = []
    image_path_list = getFileList(img_dir, image_path_list, '.jpg')
    data = []
    i = 0
    for image_path in image_path_list:
        temp = {}
        image = preprocess(Image.open(image_path)).unsqueeze(0).to(device)
        with torch.no_grad():
            image_features = model.encode_image(image)
            # Normalize the features; downstream tasks should use the normalized image/text features
            image_features /= image_features.norm(dim=-1, keepdim=True)
        image_features = image_features.cpu().numpy().astype(np.float32).flatten()
        temp['id'] = i
        temp['image_path'] = image_path
        temp['vector'] = image_features
        data.append(temp)
        i = i + 1
        print(i)
    res = client.insert(collection_name="text_image", data=data)
Running the code above creates a coco2017.db file at the specified path (Milvus Lite keeps the whole database in this single local file). That means ingestion is complete, and we can now query it.
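Optionally, you can sanity-check the ingestion by reopening the database file and inspecting the collection. This is a small sketch using standard MilvusClient calls, not part of the original demo:
from pymilvus import MilvusClient

client = MilvusClient("/home/project_python/Chinese-CLIP/my_database/coco2017.db")
print(client.list_collections())  # expect ['text_image']
print(client.get_collection_stats(collection_name="text_image"))  # row_count should equal the number of .jpg files ingested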
Data Query
import torch
from PIL import Image, ImageDraw, ImageFont
import cn_clip.clip as clip
from cn_clip.clip import load_from_name, available_models
import numpy as np
import time
from pymilvus import MilvusClient
client = MilvusClient("/home/project_python/Chinese-CLIP/my_database/coco2017.db")
print("Available models:", available_models())
# Available models: ['ViT-B-16', 'ViT-L-14', 'ViT-L-14-336', 'ViT-H-14', 'RN50']

def display_single_image_with_text(image_path):
    with Image.open(image_path) as img:
        draw = ImageDraw.Draw(img)
        # Set the font and size; this assumes a usable font file such as Arial.ttf,
        # otherwise fall back to the system default font
        try:
            font = ImageFont.truetype("Arial.ttf", 30)
        except IOError:
            font = ImageFont.load_default()
        # Text content and color
        text = "Example image"
        text_color = (255, 0, 0)  # red
        # Text position
        text_position = (10, 10)
        # Draw the text
        draw.text(text_position, text, fill=text_color, font=font)
        # Show the image
        img.show()

def display_images_in_grid(image_paths, images_per_row=3):
    # Compute how many rows are needed
    num_images = len(image_paths)
    num_rows = (num_images + images_per_row - 1) // images_per_row
    # Open all images and resize them
    images = []
    for path in image_paths:
        with Image.open(path) as img:
            img = img.resize((200, 200))  # resize to fit the canvas
            images.append(img)
    # Create a blank canvas
    canvas_width = images_per_row * 200
    canvas_height = num_rows * 200
    canvas = Image.new('RGB', (canvas_width, canvas_height), (255, 255, 255))
    # Paste the images onto the canvas
    for idx, img in enumerate(images):
        row = idx // images_per_row
        col = idx % images_per_row
        position = (col * 200, row * 200)
        canvas.paste(img, position)
    # Show the canvas
    canvas.show()

def load_model(device):
    model, preprocess = load_from_name("ViT-B-16", device=device, download_root='./')
    model.eval()
    return model, preprocess

def text_encode(model, text, device):
    new_text = clip.tokenize([text]).to(device)
    with torch.no_grad():
        text_features = model.encode_text(new_text)
        text_features /= text_features.norm(dim=-1, keepdim=True)
    text_features = text_features.cpu().numpy().astype(np.float32)
    return text_features

def image_encode(model, preprocess, image_path, device):
    image = preprocess(Image.open(image_path)).unsqueeze(0).to(device)
    with torch.no_grad():
        image_features = model.encode_image(image)
        image_features /= image_features.norm(dim=-1, keepdim=True)
    image_features = image_features.cpu().numpy().astype(np.float32)
    return image_features

if __name__ == "__main__":
    search_text = "猫"  # "cat"
    search_image_path = "/home/project_python/Chinese-CLIP/my_dataset/coco/val2017/000000000285.jpg"
    device = "cuda" if torch.cuda.is_available() else "cpu"
    model, preprocess = load_model(device)
    text_flag = False
    if text_flag:
        text_features = text_encode(model, search_text, device)
        results = client.search(
            "text_image",
            data=text_features,
            output_fields=["image_path"],
            search_params={"metric_type": "COSINE"},
            limit=36
        )
    else:
        display_single_image_with_text(search_image_path)
        image_features = image_encode(model, preprocess, search_image_path, device)
        results = client.search(
            "text_image",
            data=image_features,
            output_fields=["image_path"],
            search_params={"metric_type": "COSINE"},
            limit=36
        )
    image_list = []
    for i, result in enumerate(results[0]):
        image_list.append(result["entity"]["image_path"])
    display_images_in_grid(image_list, 9)
The code above uses text_flag to switch between the two modes: True performs text-to-image search, False performs image-to-image search.
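If you want something cleaner than a flag, one way to package both modes behind a single helper is sketched below. It reuses text_encode / image_encode and the client, model, preprocess, and device objects from the script above; the search_by function itself is not part of the original demo.
def search_by(client, model, preprocess, device, text=None, image_path=None, limit=36):
    # Encode whichever query was supplied; exactly one of text / image_path is expected
    if text is not None:
        query = text_encode(model, text, device)
    elif image_path is not None:
        query = image_encode(model, preprocess, image_path, device)
    else:
        raise ValueError("Provide either text or image_path")
    results = client.search(
        "text_image",
        data=query,
        output_fields=["image_path"],
        search_params={"metric_type": "COSINE"},
        limit=limit,
    )
    # Return just the matched image paths, best match first
    return [hit["entity"]["image_path"] for hit in results[0]]

# Usage:
# paths = search_by(client, model, preprocess, device, text="猫")
# paths = search_by(client, model, preprocess, device, image_path=search_image_path)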
Results
Text-to-Image Search
Image-to-Image Search
Example image:
Search results:
Additional Notes
1. Chinese-CLIP also supports ONNX and TensorRT inference, which you can adopt to suit your needs; reference link below, and see the ONNX Runtime sketch after this list:
https://openi.pcl.ac.cn/FoundationModel/Chinese-CLIP/src/branch/master/deployment.md (deployment.md in the Chinese-CLIP repository on OpenI; the project is the Chinese version of CLIP, trained on roughly 200 million Chinese image-text pairs, covering image/text feature extraction and similarity computation, cross-modal retrieval, and zero-shot image classification)
2. If your schedule allows, you can also try reproducing the project below; what this article offers is only a rough demo that is hardly presentable. Reference link:
https://github.com/jackyzha0/nanoDB
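For item 1, once an encoder has been exported with the official Chinese-CLIP conversion scripts, inference through ONNX Runtime generally looks like the sketch below. The model filename, input shape, and dtype here are assumptions that depend on how you export, so check the deployment guide linked above.
import numpy as np
import onnxruntime as ort

# Assumed filename of an exported image encoder; substitute whatever the conversion script produced.
# Add "CUDAExecutionProvider" to the list if onnxruntime-gpu is installed.
sess = ort.InferenceSession("vit-b-16.img.fp32.onnx", providers=["CPUExecutionProvider"])
input_name = sess.get_inputs()[0].name  # query the real input name instead of hard-coding it

# Placeholder for a preprocessed image batch (1 x 3 x 224 x 224 for ViT-B-16); dtype must match the export
image = np.random.rand(1, 3, 224, 224).astype(np.float32)
(features,) = sess.run(None, {input_name: image})  # assumes the exported encoder has a single output
features /= np.linalg.norm(features, axis=-1, keepdims=True)  # normalize, same as the PyTorch path
print(features.shape)  # expect (1, 512)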