PaddleOCR v4 (paddlepaddle-gpu 2.6.1) hands-on notes

Contents

Result image:

Installation

Model weights are downloaded automatically; downloading them yourself in advance leads to errors.

Run OCR and visualize the results with OpenCV (with Chinese text rendering)

Prediction and visualization based on the official paddleocr.py:


Result image:

Installation

With paddlepaddle-gpu 2.5.2 installed, the recognition results came back empty, so install 2.6.1 instead:

pip install paddlepaddle-gpu==2.6.1
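Before running any OCR, it is worth sanity-checking the install. A minimal sketch (assumes a CUDA-capable machine and the GPU wheel):

import paddle

print(paddle.__version__)                      # expect 2.6.1
print(paddle.device.is_compiled_with_cuda())   # True for the -gpu wheel
paddle.utils.run_check()                       # runs a small test program on the available device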

Model weights are downloaded automatically; downloading them yourself in advance leads to errors.
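In other words, the simplest setup is to pass no *_model_dir arguments and let the wrapper fetch and cache the PP-OCRv4 weights itself (they land under ~/.paddleocr/ by default). A minimal sketch:

from paddleocr import PaddleOCR

# No det/cls/rec model_dir arguments: the detection, classification and
# recognition weights are downloaded and cached on first use.
ocr_model = PaddleOCR(use_angle_cls=True, lang="ch", use_gpu=True)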

Test code:


import os
import time

from paddleocr import PaddleOCR

filepath = r"weights/123.jpg"
ocr_model = PaddleOCR(use_angle_cls=True, lang="ch", use_gpu=True, show_log=1,
                      det_db_box_thresh=0.1, use_dilation=True,
                      # local weight paths; drop the three *_model_dir arguments
                      # to let PaddleOCR download the weights itself
                      det_model_dir='weight/ch_PP-OCRv4_det_server_infer.tar',
                      cls_model_dir='weight/ch_ppocr_mobile_v2.0_cls_infer.tar',
                      rec_model_dir='weight/ch_PP-OCRv4_rec_server_infer.tar')

n_runs = 1
t1 = time.time()
for i in range(n_runs):
    result = ocr_model.ocr(img=filepath, det=True, rec=True, cls=True)[0]
t2 = time.time()
print((t2 - t1) / n_runs)  # average seconds per call

for res_str in result:
    print(res_str)
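With det=True and rec=True, each element of result is a [box, (text, score)] pair, where box holds the four corner points of the detected region (in the 2.x wheels, ordered clockwise starting from the top-left corner). A small sketch of unpacking it:

for box, (text, score) in result:
    # box: [[x1, y1], [x2, y2], [x3, y3], [x4, y4]]
    print(text, round(score, 3), box)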

Run OCR and visualize the results with OpenCV (with Chinese text rendering)

import codecs
import os
import time

import cv2
import numpy as np
from PIL import Image, ImageDraw, ImageFont

from paddleocr import PaddleOCR

filepath = r"weights/124.jpg"
ocr_model = PaddleOCR(use_angle_cls=True, lang="ch", use_gpu=True, show_log=1,
                      det_db_box_thresh=0.1, use_dilation=True,
                      det_model_dir='weight/ch_PP-OCRv4_det_server_infer.tar',
                      cls_model_dir='weight/ch_ppocr_mobile_v2.0_cls_infer.tar',
                      rec_model_dir='weight/ch_PP-OCRv4_rec_server_infer.tar')

n_runs = 1
t1 = time.time()
for i in range(n_runs):
    result = ocr_model.ocr(img=filepath, det=True, rec=True, cls=True)[0]
t2 = time.time()
print((t2 - t1) / n_runs)  # average seconds per call

font_path = 'simhei.ttf'  # replace with the path to a Chinese font on your machine
font = ImageFont.truetype(font_path, 24)


def cv2AddChineseText(img, text, position, textColor=(0, 255, 0), textSize=30):
    """Draw Chinese text on an OpenCV image via PIL (cv2.putText cannot render Chinese)."""
    img = Image.fromarray(cv2.cvtColor(img, cv2.COLOR_BGR2RGB))
    draw = ImageDraw.Draw(img)
    draw.text(position, text, textColor, font=font)
    return cv2.cvtColor(np.asarray(img), cv2.COLOR_RGB2BGR)


image = cv2.imread(filepath)
ocr_index = 0
for res_str in result:
    # first column: keep boxes whose x-range falls inside (36, 84)
    if res_str[0][0][0] > 36 and res_str[0][2][0] < 84:
        print(ocr_index, res_str)
        points = res_str[0]
        text = res_str[1][0]
        points = np.array(points, dtype=np.int32).reshape((-1, 1, 2))
        cv2.polylines(image, [points], isClosed=True, color=(255, 0, 0), thickness=2)
        text_position = (int(points[0][0][0]), int(points[0][0][1] + 20))  # nudge the label below the box corner
        image = cv2AddChineseText(image, text, text_position, textColor=(0, 255, 0), textSize=30)
        print(ocr_index)
    # second column: x-range inside (346, 391)
    if res_str[0][0][0] > 346 and res_str[0][2][0] < 391:
        print(ocr_index, res_str)
        points = res_str[0]
        text = res_str[1][0]
        points = np.array(points, dtype=np.int32).reshape((-1, 1, 2))
        cv2.polylines(image, [points], isClosed=True, color=(255, 0, 0), thickness=2)
        text_position = (int(points[0][0][0]), int(points[0][0][1] + 20))
        image = cv2AddChineseText(image, text, text_position, textColor=(0, 255, 0), textSize=30)
    # third column: x-range inside (658, 705)
    if res_str[0][0][0] > 658 and res_str[0][2][0] < 705:
        print(ocr_index, res_str)
        points = res_str[0]
        text = res_str[1][0]
        points = np.array(points, dtype=np.int32).reshape((-1, 1, 2))
        cv2.polylines(image, [points], isClosed=True, color=(255, 0, 0), thickness=2)
        text_position = (int(points[0][0][0]), int(points[0][0][1] + 20))
        image = cv2AddChineseText(image, text, text_position, textColor=(0, 255, 0), textSize=30)
    ocr_index += 1

cv2.imshow('Image with Rectangle and Text', image)
cv2.waitKey(0)
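The three if branches above hard-code x-coordinate windows (36-84, 346-391, 658-705) to pick out three specific columns in this particular image. For other layouts the same idea can be factored into a small helper; a sketch, with the column ranges left as placeholders to tune per image:

def filter_by_x_range(ocr_result, x_min, x_max):
    """Keep detections whose first-corner x is > x_min and third-corner x is < x_max."""
    kept = []
    for box, (text, score) in ocr_result:
        if box[0][0] > x_min and box[2][0] < x_max:
            kept.append((box, text, score))
    return kept

# the same three columns as above
for x_min, x_max in [(36, 84), (346, 391), (658, 705)]:
    for box, text, score in filter_by_x_range(result, x_min, x_max):
        print(text, score)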

Prediction and visualization based on the official paddleocr.py:

# Copyright (c) 2020 PaddlePaddle Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

import os
import sys
import importlib

__dir__ = os.path.dirname(__file__)

import paddle
from paddle.utils import try_import

sys.path.append(os.path.join(__dir__, ""))

import cv2
import logging
import numpy as np
from pathlib import Path
import base64
from io import BytesIO
from PIL import Image, ImageFont, ImageDraw
from tools.infer import predict_system


def _import_file(module_name, file_path, make_importable=False):
    spec = importlib.util.spec_from_file_location(module_name, file_path)
    module = importlib.util.module_from_spec(spec)
    spec.loader.exec_module(module)
    if make_importable:
        sys.modules[module_name] = module
    return module


tools = _import_file("tools", os.path.join(__dir__, "tools/__init__.py"), make_importable=True)
ppocr = importlib.import_module("ppocr", "paddleocr")
ppstructure = importlib.import_module("ppstructure", "paddleocr")
from ppocr.utils.logging import get_logger

logger = get_logger()
from ppocr.utils.utility import (check_and_read, get_image_file_list, alpha_to_color, binarize_img, )
from ppocr.utils.network import (maybe_download, download_with_progressbar, is_link, confirm_model_dir_url, )
from tools.infer.utility import draw_ocr, str2bool, check_gpu
from ppstructure.utility import init_args, draw_structure_result
from ppstructure.predict_system import StructureSystem, save_structure_res, to_excel

logger = get_logger()

__all__ = ["PaddleOCR", "PPStructure", "draw_ocr", "draw_structure_result", "save_structure_res", "download_with_progressbar", "to_excel", ]

SUPPORT_DET_MODEL = ["DB"]
VERSION = "2.8.0"
SUPPORT_REC_MODEL = ["CRNN", "SVTR_LCNet"]
BASE_DIR = os.path.expanduser("~/.paddleocr/")

DEFAULT_OCR_MODEL_VERSION = "PP-OCRv4"
SUPPORT_OCR_MODEL_VERSION = ["PP-OCR", "PP-OCRv2", "PP-OCRv3", "PP-OCRv4"]
DEFAULT_STRUCTURE_MODEL_VERSION = "PP-StructureV2"
SUPPORT_STRUCTURE_MODEL_VERSION = ["PP-Structure", "PP-StructureV2"]
MODEL_URLS = {"OCR": {"PP-OCRv4": {"det": {"ch": {"url": "https://paddleocr.bj.bcebos.com/PP-OCRv4/chinese/ch_PP-OCRv4_det_infer.tar", }, "en": {"url": "https://paddleocr.bj.bcebos.com/PP-OCRv3/english/en_PP-OCRv3_det_infer.tar", },"ml": {"url": "https://paddleocr.bj.bcebos.com/PP-OCRv3/multilingual/Multilingual_PP-OCRv3_det_infer.tar"}, },"rec": {"ch": {"url": "https://paddleocr.bj.bcebos.com/PP-OCRv4/chinese/ch_PP-OCRv4_rec_infer.tar", "dict_path": "./ppocr/utils/ppocr_keys_v1.txt", }, "en": {"url": "https://paddleocr.bj.bcebos.com/PP-OCRv4/english/en_PP-OCRv4_rec_infer.tar", "dict_path": "./ppocr/utils/en_dict.txt", },"korean": {"url": "https://paddleocr.bj.bcebos.com/PP-OCRv4/multilingual/korean_PP-OCRv4_rec_infer.tar", "dict_path": "./ppocr/utils/dict/korean_dict.txt", },"japan": {"url": "https://paddleocr.bj.bcebos.com/PP-OCRv4/multilingual/japan_PP-OCRv4_rec_infer.tar", "dict_path": "./ppocr/utils/dict/japan_dict.txt", },"chinese_cht": {"url": "https://paddleocr.bj.bcebos.com/PP-OCRv3/multilingual/chinese_cht_PP-OCRv3_rec_infer.tar", "dict_path": "./ppocr/utils/dict/chinese_cht_dict.txt", },"ta": {"url": "https://paddleocr.bj.bcebos.com/PP-OCRv4/multilingual/ta_PP-OCRv4_rec_infer.tar", "dict_path": "./ppocr/utils/dict/ta_dict.txt", },"te": {"url": "https://paddleocr.bj.bcebos.com/PP-OCRv4/multilingual/te_PP-OCRv4_rec_infer.tar", "dict_path": "./ppocr/utils/dict/te_dict.txt", },"ka": {"url": "https://paddleocr.bj.bcebos.com/PP-OCRv4/multilingual/ka_PP-OCRv4_rec_infer.tar", "dict_path": "./ppocr/utils/dict/ka_dict.txt", },"latin": {"url": "https://paddleocr.bj.bcebos.com/PP-OCRv3/multilingual/latin_PP-OCRv3_rec_infer.tar", "dict_path": "./ppocr/utils/dict/latin_dict.txt", },"arabic": {"url": "https://paddleocr.bj.bcebos.com/PP-OCRv4/multilingual/arabic_PP-OCRv4_rec_infer.tar", "dict_path": "./ppocr/utils/dict/arabic_dict.txt", },"cyrillic": {"url": "https://paddleocr.bj.bcebos.com/PP-OCRv3/multilingual/cyrillic_PP-OCRv3_rec_infer.tar", "dict_path": "./ppocr/utils/dict/cyrillic_dict.txt", },"devanagari": {"url": "https://paddleocr.bj.bcebos.com/PP-OCRv4/multilingual/devanagari_PP-OCRv4_rec_infer.tar", "dict_path": "./ppocr/utils/dict/devanagari_dict.txt", }, }, "cls": {"ch": {"url": "https://paddleocr.bj.bcebos.com/dygraph_v2.0/ch/ch_ppocr_mobile_v2.0_cls_infer.tar", }}, },"PP-OCRv3": {"det": {"ch": {"url": "https://paddleocr.bj.bcebos.com/PP-OCRv3/chinese/ch_PP-OCRv3_det_infer.tar", }, "en": {"url": "https://paddleocr.bj.bcebos.com/PP-OCRv3/english/en_PP-OCRv3_det_infer.tar", },"ml": {"url": "https://paddleocr.bj.bcebos.com/PP-OCRv3/multilingual/Multilingual_PP-OCRv3_det_infer.tar"}, },"rec": {"ch": {"url": "https://paddleocr.bj.bcebos.com/PP-OCRv3/chinese/ch_PP-OCRv3_rec_infer.tar", "dict_path": "./ppocr/utils/ppocr_keys_v1.txt", }, "en": {"url": "https://paddleocr.bj.bcebos.com/PP-OCRv3/english/en_PP-OCRv3_rec_infer.tar", "dict_path": "./ppocr/utils/en_dict.txt", },"korean": {"url": "https://paddleocr.bj.bcebos.com/PP-OCRv3/multilingual/korean_PP-OCRv3_rec_infer.tar", "dict_path": "./ppocr/utils/dict/korean_dict.txt", },"japan": {"url": "https://paddleocr.bj.bcebos.com/PP-OCRv3/multilingual/japan_PP-OCRv3_rec_infer.tar", "dict_path": "./ppocr/utils/dict/japan_dict.txt", },"chinese_cht": {"url": "https://paddleocr.bj.bcebos.com/PP-OCRv3/multilingual/chinese_cht_PP-OCRv3_rec_infer.tar", "dict_path": "./ppocr/utils/dict/chinese_cht_dict.txt", },"ta": {"url": "https://paddleocr.bj.bcebos.com/PP-OCRv3/multilingual/ta_PP-OCRv3_rec_infer.tar", "dict_path": "./ppocr/utils/dict/ta_dict.txt", 
},"te": {"url": "https://paddleocr.bj.bcebos.com/PP-OCRv3/multilingual/te_PP-OCRv3_rec_infer.tar", "dict_path": "./ppocr/utils/dict/te_dict.txt", },"ka": {"url": "https://paddleocr.bj.bcebos.com/PP-OCRv3/multilingual/ka_PP-OCRv3_rec_infer.tar", "dict_path": "./ppocr/utils/dict/ka_dict.txt", },"latin": {"url": "https://paddleocr.bj.bcebos.com/PP-OCRv3/multilingual/latin_PP-OCRv3_rec_infer.tar", "dict_path": "./ppocr/utils/dict/latin_dict.txt", },"arabic": {"url": "https://paddleocr.bj.bcebos.com/PP-OCRv3/multilingual/arabic_PP-OCRv3_rec_infer.tar", "dict_path": "./ppocr/utils/dict/arabic_dict.txt", },"cyrillic": {"url": "https://paddleocr.bj.bcebos.com/PP-OCRv3/multilingual/cyrillic_PP-OCRv3_rec_infer.tar", "dict_path": "./ppocr/utils/dict/cyrillic_dict.txt", },"devanagari": {"url": "https://paddleocr.bj.bcebos.com/PP-OCRv3/multilingual/devanagari_PP-OCRv3_rec_infer.tar", "dict_path": "./ppocr/utils/dict/devanagari_dict.txt", }, }, "cls": {"ch": {"url": "https://paddleocr.bj.bcebos.com/dygraph_v2.0/ch/ch_ppocr_mobile_v2.0_cls_infer.tar", }}, },"PP-OCRv2": {"det": {"ch": {"url": "https://paddleocr.bj.bcebos.com/PP-OCRv2/chinese/ch_PP-OCRv2_det_infer.tar", }, }, "rec": {"ch": {"url": "https://paddleocr.bj.bcebos.com/PP-OCRv2/chinese/ch_PP-OCRv2_rec_infer.tar", "dict_path": "./ppocr/utils/ppocr_keys_v1.txt", }},"cls": {"ch": {"url": "https://paddleocr.bj.bcebos.com/dygraph_v2.0/ch/ch_ppocr_mobile_v2.0_cls_infer.tar", }}, }, "PP-OCR": {"det": {"ch": {"url": "https://paddleocr.bj.bcebos.com/dygraph_v2.0/ch/ch_ppocr_mobile_v2.0_det_infer.tar", }, "en": {"url": "https://paddleocr.bj.bcebos.com/dygraph_v2.0/multilingual/en_ppocr_mobile_v2.0_det_infer.tar", },"structure": {"url": "https://paddleocr.bj.bcebos.com/dygraph_v2.0/table/en_ppocr_mobile_v2.0_table_det_infer.tar"}, }, "rec": {"ch": {"url": "https://paddleocr.bj.bcebos.com/dygraph_v2.0/ch/ch_ppocr_mobile_v2.0_rec_infer.tar", "dict_path": "./ppocr/utils/ppocr_keys_v1.txt", },"en": {"url": "https://paddleocr.bj.bcebos.com/dygraph_v2.0/multilingual/en_number_mobile_v2.0_rec_infer.tar", "dict_path": "./ppocr/utils/en_dict.txt", },"french": {"url": "https://paddleocr.bj.bcebos.com/dygraph_v2.0/multilingual/french_mobile_v2.0_rec_infer.tar", "dict_path": "./ppocr/utils/dict/french_dict.txt", },"german": {"url": "https://paddleocr.bj.bcebos.com/dygraph_v2.0/multilingual/german_mobile_v2.0_rec_infer.tar", "dict_path": "./ppocr/utils/dict/german_dict.txt", },"korean": {"url": "https://paddleocr.bj.bcebos.com/dygraph_v2.0/multilingual/korean_mobile_v2.0_rec_infer.tar", "dict_path": "./ppocr/utils/dict/korean_dict.txt", },"japan": {"url": "https://paddleocr.bj.bcebos.com/dygraph_v2.0/multilingual/japan_mobile_v2.0_rec_infer.tar", "dict_path": "./ppocr/utils/dict/japan_dict.txt", },"chinese_cht": {"url": "https://paddleocr.bj.bcebos.com/dygraph_v2.0/multilingual/chinese_cht_mobile_v2.0_rec_infer.tar", "dict_path": "./ppocr/utils/dict/chinese_cht_dict.txt", },"ta": {"url": "https://paddleocr.bj.bcebos.com/dygraph_v2.0/multilingual/ta_mobile_v2.0_rec_infer.tar", "dict_path": "./ppocr/utils/dict/ta_dict.txt", },"te": {"url": "https://paddleocr.bj.bcebos.com/dygraph_v2.0/multilingual/te_mobile_v2.0_rec_infer.tar", "dict_path": "./ppocr/utils/dict/te_dict.txt", },"ka": {"url": "https://paddleocr.bj.bcebos.com/dygraph_v2.0/multilingual/ka_mobile_v2.0_rec_infer.tar", "dict_path": "./ppocr/utils/dict/ka_dict.txt", },"latin": {"url": "https://paddleocr.bj.bcebos.com/dygraph_v2.0/multilingual/latin_ppocr_mobile_v2.0_rec_infer.tar", "dict_path": 
"./ppocr/utils/dict/latin_dict.txt", },"arabic": {"url": "https://paddleocr.bj.bcebos.com/dygraph_v2.0/multilingual/arabic_ppocr_mobile_v2.0_rec_infer.tar", "dict_path": "./ppocr/utils/dict/arabic_dict.txt", },"cyrillic": {"url": "https://paddleocr.bj.bcebos.com/dygraph_v2.0/multilingual/cyrillic_ppocr_mobile_v2.0_rec_infer.tar", "dict_path": "./ppocr/utils/dict/cyrillic_dict.txt", },"devanagari": {"url": "https://paddleocr.bj.bcebos.com/dygraph_v2.0/multilingual/devanagari_ppocr_mobile_v2.0_rec_infer.tar", "dict_path": "./ppocr/utils/dict/devanagari_dict.txt", },"structure": {"url": "https://paddleocr.bj.bcebos.com/dygraph_v2.0/table/en_ppocr_mobile_v2.0_table_rec_infer.tar", "dict_path": "ppocr/utils/dict/table_dict.txt", }, }, "cls": {"ch": {"url": "https://paddleocr.bj.bcebos.com/dygraph_v2.0/ch/ch_ppocr_mobile_v2.0_cls_infer.tar", }}, }, },"STRUCTURE": {"PP-Structure": {"table": {"en": {"url": "https://paddleocr.bj.bcebos.com/dygraph_v2.0/table/en_ppocr_mobile_v2.0_table_structure_infer.tar", "dict_path": "ppocr/utils/dict/table_structure_dict.txt", }}}, "PP-StructureV2": {"table": {"en": {"url": "https://paddleocr.bj.bcebos.com/ppstructure/models/slanet/en_ppstructure_mobile_v2.0_SLANet_infer.tar", "dict_path": "ppocr/utils/dict/table_structure_dict.txt", },"ch": {"url": "https://paddleocr.bj.bcebos.com/ppstructure/models/slanet/ch_ppstructure_mobile_v2.0_SLANet_infer.tar", "dict_path": "ppocr/utils/dict/table_structure_dict_ch.txt", }, },"layout": {"en": {"url": "https://paddleocr.bj.bcebos.com/ppstructure/models/layout/picodet_lcnet_x1_0_fgd_layout_infer.tar", "dict_path": "ppocr/utils/dict/layout_dict/layout_publaynet_dict.txt", },"ch": {"url": "https://paddleocr.bj.bcebos.com/ppstructure/models/layout/picodet_lcnet_x1_0_fgd_layout_cdla_infer.tar", "dict_path": "ppocr/utils/dict/layout_dict/layout_cdla_dict.txt", }, }, }, }, }def parse_args(mMain=True):import argparseparser = init_args()parser.add_help = mMainparser.add_argument("--lang", type=str, default="ch")parser.add_argument("--det", type=str2bool, default=True)parser.add_argument("--rec", type=str2bool, default=True)parser.add_argument("--type", type=str, default="ocr")parser.add_argument("--savefile", type=str2bool, default=False)parser.add_argument("--ocr_version", type=str, choices=SUPPORT_OCR_MODEL_VERSION, default="PP-OCRv4", help="OCR Model version, the current model support list is as follows: ""1. PP-OCRv4/v3 Support Chinese and English detection and recognition model, and direction classifier model""2. PP-OCRv2 Support Chinese detection and recognition model. ""3. PP-OCR support Chinese detection, recognition and direction classifier and multilingual recognition model.", )parser.add_argument("--structure_version", type=str, choices=SUPPORT_STRUCTURE_MODEL_VERSION, default="PP-StructureV2", help="Model version, the current model support list is as follows:"" 1. PP-Structure Support en table structure model."" 2. 
PP-StructureV2 Support ch and en table structure model.", )for action in parser._actions:if action.dest in ["rec_char_dict_path", "table_char_dict_path", "layout_dict_path", ]:action.default = Noneif mMain:return parser.parse_args()else:inference_args_dict = {}for action in parser._actions:inference_args_dict[action.dest] = action.defaultreturn argparse.Namespace(**inference_args_dict)def parse_lang(lang):latin_lang = ["af", "az", "bs", "cs", "cy", "da", "de", "es", "et", "fr", "ga", "hr", "hu", "id", "is", "it", "ku", "la", "lt", "lv", "mi", "ms", "mt", "nl", "no", "oc", "pi", "pl", "pt", "ro", "rs_latin", "sk", "sl", "sq", "sv", "sw", "tl", "tr", "uz", "vi", "french", "german", ]arabic_lang = ["ar", "fa", "ug", "ur"]cyrillic_lang = ["ru", "rs_cyrillic", "be", "bg", "uk", "mn", "abq", "ady", "kbd", "ava", "dar", "inh", "che", "lbe", "lez", "tab", ]devanagari_lang = ["hi", "mr", "ne", "bh", "mai", "ang", "bho", "mah", "sck", "new", "gom", "sa", "bgc", ]if lang in latin_lang:lang = "latin"elif lang in arabic_lang:lang = "arabic"elif lang in cyrillic_lang:lang = "cyrillic"elif lang in devanagari_lang:lang = "devanagari"assert (lang in MODEL_URLS["OCR"][DEFAULT_OCR_MODEL_VERSION]["rec"]), "param lang must in {}, but got {}".format(MODEL_URLS["OCR"][DEFAULT_OCR_MODEL_VERSION]["rec"].keys(), lang)if lang == "ch":det_lang = "ch"elif lang == "structure":det_lang = "structure"elif lang in ["en", "latin"]:det_lang = "en"else:det_lang = "ml"return lang, det_langdef get_model_config(type, version, model_type, lang):if type == "OCR":DEFAULT_MODEL_VERSION = DEFAULT_OCR_MODEL_VERSIONelif type == "STRUCTURE":DEFAULT_MODEL_VERSION = DEFAULT_STRUCTURE_MODEL_VERSIONelse:raise NotImplementedErrormodel_urls = MODEL_URLS[type]if version not in model_urls:version = DEFAULT_MODEL_VERSIONif model_type not in model_urls[version]:if model_type in model_urls[DEFAULT_MODEL_VERSION]:version = DEFAULT_MODEL_VERSIONelse:logger.error("{} models is not support, we only support {}".format(model_type, model_urls[DEFAULT_MODEL_VERSION].keys()))sys.exit(-1)if lang not in model_urls[version][model_type]:if lang in model_urls[DEFAULT_MODEL_VERSION][model_type]:version = DEFAULT_MODEL_VERSIONelse:logger.error("lang {} is not support, we only support {} for {} models".format(lang, model_urls[DEFAULT_MODEL_VERSION][model_type].keys(), model_type, ))sys.exit(-1)return model_urls[version][model_type][lang]def img_decode(content: bytes):np_arr = np.frombuffer(content, dtype=np.uint8)return cv2.imdecode(np_arr, cv2.IMREAD_UNCHANGED)def check_img(img, alpha_color=(255, 255, 255)):"""Check the image data. 
If it is another type of image file, try to decode it into a numpy array.The inference network requires three-channel images, So the following channel conversions are donesingle channel image: Gray to RGB R←Y,G←Y,B←Yfour channel image: alpha_to_colorargs:img: image datafile format: jpg, png and other image formats that opencv can decode, as well as gif and pdf formatsstorage type: binary image, net image file, local image filealpha_color: Background color in images in RGBA formatreturn: numpy.array (h, w, 3) or list (p, h, w, 3) (p: page of pdf), boolean, boolean"""flag_gif, flag_pdf = False, Falseif isinstance(img, bytes):img = img_decode(img)if isinstance(img, str):# download net imageif is_link(img):download_with_progressbar(img, "tmp.jpg")img = "tmp.jpg"image_file = imgimg, flag_gif, flag_pdf = check_and_read(image_file)if not flag_gif and not flag_pdf:with open(image_file, "rb") as f:img_str = f.read()img = img_decode(img_str)if img is None:try:buf = BytesIO()image = BytesIO(img_str)im = Image.open(image)rgb = im.convert("RGB")rgb.save(buf, "jpeg")buf.seek(0)image_bytes = buf.read()data_base64 = str(base64.b64encode(image_bytes), encoding="utf-8")image_decode = base64.b64decode(data_base64)img_array = np.frombuffer(image_decode, np.uint8)img = cv2.imdecode(img_array, cv2.IMREAD_COLOR)except:logger.error("error in loading image:{}".format(image_file))return None, flag_gif, flag_pdfif img is None:logger.error("error in loading image:{}".format(image_file))return None, flag_gif, flag_pdf# single channel image array.shape:h,wif isinstance(img, np.ndarray) and len(img.shape) == 2:img = cv2.cvtColor(img, cv2.COLOR_GRAY2BGR)# four channel image array.shape:h,w,cif isinstance(img, np.ndarray) and len(img.shape) == 3 and img.shape[2] == 4:img = alpha_to_color(img, alpha_color)return img, flag_gif, flag_pdfclass PaddleOCR(predict_system.TextSystem):def __init__(self, **kwargs):"""paddleocr packageargs:**kwargs: other params show in paddleocr --help"""params = parse_args(mMain=False)params.__dict__.update(**kwargs)assert (params.ocr_version in SUPPORT_OCR_MODEL_VERSION), "ocr_version must in {}, but get {}".format(SUPPORT_OCR_MODEL_VERSION, params.ocr_version)params.use_gpu = check_gpu(params.use_gpu)if not params.show_log:logger.setLevel(logging.INFO)self.use_angle_cls = params.use_angle_clslang, det_lang = parse_lang(params.lang)# init model dirdet_model_config = get_model_config("OCR", params.ocr_version, "det", det_lang)params.det_model_dir, det_url = confirm_model_dir_url(params.det_model_dir, os.path.join(BASE_DIR, "whl", "det", det_lang), det_model_config["url"], )rec_model_config = get_model_config("OCR", params.ocr_version, "rec", lang)params.rec_model_dir, rec_url = confirm_model_dir_url(params.rec_model_dir, os.path.join(BASE_DIR, "whl", "rec", lang), rec_model_config["url"], )cls_model_config = get_model_config("OCR", params.ocr_version, "cls", "ch")params.cls_model_dir, cls_url = confirm_model_dir_url(params.cls_model_dir, os.path.join(BASE_DIR, "whl", "cls"), cls_model_config["url"], )if params.ocr_version in ["PP-OCRv3", "PP-OCRv4"]:params.rec_image_shape = "3, 48, 320"else:params.rec_image_shape = "3, 32, 320"# download model if using paddle inferif not params.use_onnx:maybe_download(params.det_model_dir, det_url)maybe_download(params.rec_model_dir, rec_url)maybe_download(params.cls_model_dir, cls_url)if params.det_algorithm not in SUPPORT_DET_MODEL:logger.error("det_algorithm must in {}".format(SUPPORT_DET_MODEL))sys.exit(0)if params.rec_algorithm not in 
SUPPORT_REC_MODEL:logger.error("rec_algorithm must in {}".format(SUPPORT_REC_MODEL))sys.exit(0)if params.rec_char_dict_path is None:params.rec_char_dict_path = str(Path(__file__).parent / rec_model_config["dict_path"])logger.debug(params)# init det_model and rec_modelsuper().__init__(params)self.page_num = params.page_numdef ocr(self, img, det=True, rec=True, cls=True, bin=False, inv=False, alpha_color=(255, 255, 255), ):"""OCR with PaddleOCRargs:img: img for OCR, support ndarray, img_path and list or ndarraydet: use text detection or not. If False, only rec will be exec. Default is Truerec: use text recognition or not. If False, only det will be exec. Default is Truecls: use angle classifier or not. Default is True. If True, the text with rotation of 180 degrees can be recognized. If no text is rotated by 180 degrees, use cls=False to get better performance. Text with rotation of 90 or 270 degrees can be recognized even if cls=False.bin: binarize image to black and white. Default is False.inv: invert image colors. Default is False.alpha_color: set RGB color Tuple for transparent parts replacement. Default is pure white."""assert isinstance(img, (np.ndarray, list, str, bytes))if isinstance(img, list) and det == True:logger.error("When input a list of images, det must be false")exit(0)if cls == True and self.use_angle_cls == False:logger.warning("Since the angle classifier is not initialized, it will not be used during the forward process")img, flag_gif, flag_pdf = check_img(img, alpha_color)# for infer pdf fileif isinstance(img, list) and flag_pdf:if self.page_num > len(img) or self.page_num == 0:imgs = imgelse:imgs = img[: self.page_num]else:imgs = [img]def preprocess_image(_image):_image = alpha_to_color(_image, alpha_color)if inv:_image = cv2.bitwise_not(_image)if bin:_image = binarize_img(_image)return _imageif det and rec:ocr_res = []for idx, img in enumerate(imgs):img = preprocess_image(img)dt_boxes, rec_res, _ = self.__call__(img, cls)if not dt_boxes and not rec_res:ocr_res.append(None)continuetmp_res = [[box.tolist(), res] for box, res in zip(dt_boxes, rec_res)]ocr_res.append(tmp_res)return ocr_reselif det and not rec:ocr_res = []for idx, img in enumerate(imgs):img = preprocess_image(img)dt_boxes, elapse = self.text_detector(img)if dt_boxes.size == 0:ocr_res.append(None)continuetmp_res = [box.tolist() for box in dt_boxes]ocr_res.append(tmp_res)return ocr_reselse:ocr_res = []cls_res = []for idx, img in enumerate(imgs):if not isinstance(img, list):img = preprocess_image(img)img = [img]if self.use_angle_cls and cls:img, cls_res_tmp, elapse = self.text_classifier(img)if not rec:cls_res.append(cls_res_tmp)rec_res, elapse = self.text_recognizer(img)ocr_res.append(rec_res)if not rec:return cls_resreturn ocr_resclass PPStructure(StructureSystem):def __init__(self, **kwargs):params = parse_args(mMain=False)params.__dict__.update(**kwargs)assert (params.structure_version in SUPPORT_STRUCTURE_MODEL_VERSION), "structure_version must in {}, but get {}".format(SUPPORT_STRUCTURE_MODEL_VERSION, params.structure_version)params.use_gpu = check_gpu(params.use_gpu)params.mode = "structure"if not params.show_log:logger.setLevel(logging.INFO)lang, det_lang = parse_lang(params.lang)if lang == "ch":table_lang = "ch"else:table_lang = "en"if params.structure_version == "PP-Structure":params.merge_no_span_structure = False# init model dirdet_model_config = get_model_config("OCR", params.ocr_version, "det", det_lang)params.det_model_dir, det_url = confirm_model_dir_url(params.det_model_dir, 
os.path.join(BASE_DIR, "whl", "det", det_lang), det_model_config["url"], )rec_model_config = get_model_config("OCR", params.ocr_version, "rec", lang)params.rec_model_dir, rec_url = confirm_model_dir_url(params.rec_model_dir, os.path.join(BASE_DIR, "whl", "rec", lang), rec_model_config["url"], )table_model_config = get_model_config("STRUCTURE", params.structure_version, "table", table_lang)params.table_model_dir, table_url = confirm_model_dir_url(params.table_model_dir, os.path.join(BASE_DIR, "whl", "table"), table_model_config["url"], )layout_model_config = get_model_config("STRUCTURE", params.structure_version, "layout", lang)params.layout_model_dir, layout_url = confirm_model_dir_url(params.layout_model_dir, os.path.join(BASE_DIR, "whl", "layout"), layout_model_config["url"], )# download modelif not params.use_onnx:maybe_download(params.det_model_dir, det_url)maybe_download(params.rec_model_dir, rec_url)maybe_download(params.table_model_dir, table_url)maybe_download(params.layout_model_dir, layout_url)if params.rec_char_dict_path is None:params.rec_char_dict_path = str(Path(__file__).parent / rec_model_config["dict_path"])if params.table_char_dict_path is None:params.table_char_dict_path = str(Path(__file__).parent / table_model_config["dict_path"])if params.layout_dict_path is None:params.layout_dict_path = str(Path(__file__).parent / layout_model_config["dict_path"])logger.debug(params)super().__init__(params)def __call__(self, img, return_ocr_result_in_table=False, img_idx=0, alpha_color=(255, 255, 255), ):img, flag_gif, flag_pdf = check_img(img, alpha_color)if isinstance(img, list) and flag_pdf:res_list = []for index, pdf_img in enumerate(img):logger.info("processing {}/{} page:".format(index + 1, len(img)))res, _ = super().__call__(pdf_img, return_ocr_result_in_table, img_idx=index)res_list.append(res)return res_listres, _ = super().__call__(img, return_ocr_result_in_table, img_idx=img_idx)return res
def cv2AddChineseText(img, text, position, textColor=(0, 255, 0), textSize=30):
    """Draw Chinese text on an OpenCV image via PIL, since cv2.putText cannot render it."""
    img = Image.fromarray(cv2.cvtColor(img, cv2.COLOR_BGR2RGB))
    draw = ImageDraw.Draw(img)
    draw.text(position, text, textColor, font=font)
    return cv2.cvtColor(np.asarray(img), cv2.COLOR_RGB2BGR)


if __name__ == '__main__':
    font_path = 'simhei.ttf'  # replace with the path to a Chinese font on your machine
    font = ImageFont.truetype(font_path, 24)

    # for cmd
    args = parse_args(mMain=True)
    image_dir = args.image_dir
    image_file_list = ['weights/123.jpg']  # hard-coded test image instead of reading args.image_dir
    if args.type == "ocr":
        engine = PaddleOCR(**(args.__dict__))
    elif args.type == "structure":
        engine = PPStructure(**(args.__dict__))
    else:
        raise NotImplementedError

    for img_path in image_file_list:
        img_name = os.path.basename(img_path).split(".")[0]
        logger.info("{}{}{}".format("*" * 10, img_path, "*" * 10))
        if args.type == "ocr":
            image = cv2.imread(img_path)
            result = engine.ocr(img_path, det=args.det, rec=args.rec, cls=args.use_angle_cls,
                                bin=args.binarize, inv=args.invert, alpha_color=args.alphacolor, )
            if result is not None:
                lines = []
                for idx in range(len(result)):
                    res = result[idx]
                    for line in res:
                        # draw the detected box and its recognized text on the image
                        points = line[0]
                        text = line[1][0]
                        points = np.array(points, dtype=np.int32).reshape((-1, 1, 2))
                        cv2.polylines(image, [points], isClosed=True, color=(255, 0, 0), thickness=2)
                        text_position = (int(points[0][0][0]), int(points[0][0][1] + 20))  # nudge the label position
                        # cv2.putText cannot render Chinese, hence the PIL-based helper above
                        image = cv2AddChineseText(image, text, text_position, textColor=(0, 255, 0), textSize=30)
                        logger.info(line)
                        val = "["
                        for box in line[0]:
                            val += str(box[0]) + "," + str(box[1]) + ","
                        val = val[:-1]
                        val += "]," + line[1][0] + "," + str(line[1][1]) + "\n"
                        lines.append(val)
                if args.savefile:
                    if os.path.exists(args.output) is False:
                        os.mkdir(args.output)
                    outfile = args.output + "/" + img_name + ".txt"
                    with open(outfile, "w", encoding="utf-8") as f:
                        f.writelines(lines)
        elif args.type == "structure":
            img, flag_gif, flag_pdf = check_and_read(img_path)
            if not flag_gif and not flag_pdf:
                img = cv2.imread(img_path)

            if not flag_pdf:
                if img is None:
                    logger.error("error in loading image:{}".format(img_path))
                    continue
                img_paths = [[img_path, img]]
            else:
                img_paths = []
                for index, pdf_img in enumerate(img):
                    os.makedirs(os.path.join(args.output, img_name), exist_ok=True)
                    pdf_img_path = os.path.join(args.output, img_name, img_name + "_" + str(index) + ".jpg")
                    cv2.imwrite(pdf_img_path, pdf_img)
                    img_paths.append([pdf_img_path, pdf_img])

            all_res = []
            for index, (new_img_path, img) in enumerate(img_paths):
                logger.info("processing {}/{} page:".format(index + 1, len(img_paths)))
                new_img_name = os.path.basename(new_img_path).split(".")[0]
                result = engine(img, img_idx=index)
                save_structure_res(result, args.output, img_name, index)
                if args.recovery and result != []:
                    from copy import deepcopy
                    from ppstructure.recovery.recovery_to_doc import sorted_layout_boxes

                    h, w, _ = img.shape
                    result_cp = deepcopy(result)
                    result_sorted = sorted_layout_boxes(result_cp, w)
                    all_res += result_sorted

            if args.recovery and all_res != []:
                try:
                    from ppstructure.recovery.recovery_to_doc import convert_info_docx

                    convert_info_docx(img, all_res, args.output, img_name)
                except Exception as ex:
                    logger.error("error in layout recovery image:{}, err msg: {}".format(img_name, ex))
                    continue

            for item in all_res:
                item.pop("img")
                item.pop("res")
                logger.info(item)
            logger.info("result save to {}".format(args.output))

        # show the annotated image (populated in the "ocr" branch)
        cv2.imshow('image', image)
        cv2.waitKey(0)
