PiDiNet Inference Procedure

  1. GitHub link

https://github.com/hellozhuo/pidinet
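
Fetching the code is standard git usage (nothing project-specific here):

git clone https://github.com/hellozhuo/pidinet.git
cd pidinet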

  2. Runtime environment

      Python 3.8

      filelock==3.14.0

      fsspec==2024.5.0

      imageio==2.34.1

      intel-openmp==2021.4.0

      Jinja2==3.1.4

      lazy_loader==0.4

      MarkupSafe==2.1.5

      mkl==2021.4.0

      mpmath==1.3.0

      networkx==3.1

      numpy==1.24.4

      opencv-python==4.9.0.80

      packaging==24.0

      pillow==10.3.0

      PyWavelets==1.4.1

      scikit-image==0.21.0

      scipy==1.10.1

      sympy==1.12.1

      tbb==2021.12.0

      tifffile==2023.7.10

      torch==2.3.0

      torchvision==0.18.0

      typing_extensions==4.12.0
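
One way to reproduce this environment (a sketch; the file name requirements.txt is chosen here for illustration and is not part of the repo) is to paste the pinned list above into a file and install it into a Python 3.8 environment:

pip install -r requirements.txt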

  3. Preparation

Pretrained models: located under the trained_models folder (the project's trained_models directory already includes them).

  table5: BSDS500 dataset

  • table5_baseline.pth: standard convolution

  • table5_pidinet.pth: standard convolutions in the residual blocks replaced with PDC (pixel difference convolution)

  • table5_pidinet-l.pth: lightweight version, without the CSAM and CDCM modules

  • table5_pidinet-small.pth: small version

  • table5_pidinet-small-l.pth: small version, without the CSAM and CDCM modules

  • table5_pidinet-tiny.pth: tiny version

  • table5_pidinet-tiny-l.pth: tiny version, without the CSAM and CDCM modules

  table6: NYUD dataset

  • table6_pidinet.pth

  table7: Multicue dataset

  • table7_pidinet.pth
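
Before wiring these checkpoints into the evaluation flow, it can help to peek inside one. A minimal sketch (assuming the trained_models layout above, and that each .pth stores at least 'epoch' and 'state_dict', which is what the code in the next section expects):

import torch

# Load on CPU just to inspect the contents of the checkpoint dict
ckpt = torch.load('trained_models/table5_pidinet.pth', map_location='cpu')
print(ckpt.keys())                          # e.g. dict_keys(['epoch', 'state_dict', ...])
print(list(ckpt['state_dict'].keys())[:3])  # keys carry a 'module.' prefix when saved via DataParallel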

  4. Code modification

In main.py, the checkpoint-loading logic in the evaluate branch of main() is changed as follows:

if checkpoint is not None:
    args.start_epoch = checkpoint['epoch'] + 1
    # extra add
    state_dict = checkpoint['state_dict']
    # strip the 'module.' prefix left by nn.DataParallel
    state_dict = {k.replace('module.', ''): v for k, v in state_dict.items()}
    if args.evaluate_converted:
        # model.load_state_dict(convert_pidinet(checkpoint['state_dict'], args.config))
        model.load_state_dict(convert_pidinet(state_dict, args.config))
    else:
        model.load_state_dict(checkpoint['state_dict'])
else:
    raise ValueError('no checkpoint loaded')

Reason: when a model wrapped with torch.nn.DataParallel is saved, every key in the weight file carries a 'module.' prefix. When those weights are loaded into an unwrapped model (or one running on a single GPU/CPU), the prefixed keys fail to match the model's parameter names, so the prefix has to be stripped before the state dict can be loaded successfully.

P.S. Whether this modification is actually needed depends on the environment the code runs in.
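
A minimal, self-contained sketch of why the prefix appears (the toy module below is hypothetical, standing in for PiDiNet):

import torch
import torch.nn as nn

model = nn.Sequential(nn.Conv2d(3, 8, 3))  # stand-in for PiDiNet

# Saving through DataParallel prefixes every state-dict key with 'module.'
wrapped = nn.DataParallel(model)
print(list(wrapped.state_dict().keys()))   # ['module.0.weight', 'module.0.bias']

# Loading such keys into the unwrapped model fails; stripping the prefix fixes the mismatch
state_dict = {k.replace('module.', ''): v for k, v in wrapped.state_dict().items()}
model.load_state_dict(state_dict)          # keys now match: ['0.weight', '0.bias']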

  5. Command lines for generating edge maps

Pretrained model: table5_pidinet

With GPU:

 

python main.py --model pidinet_converted --config carv4 --sa --dil -j 4 --gpu 0 --savedir result --datadir custom --dataset Custom --evaluate trained_models/table5_pidinet.pth --evaluate-converted

Without GPU:

 

python main.py --model pidinet_converted --config carv4 --sa --dil -j 4 --savedir result --datadir custom --dataset Custom --evaluate trained_models/table5_pidinet.pth --evaluate-converted

Other pretrained models:

table5_pidinet-l (the -l variants have no CSAM/CDCM, so --sa and --dil are omitted)

 

python main.py --model pidinet_converted --config carv4 -j 4 --savedir result/table5_pidinet-l --datadir custom --dataset Custom --evaluate trained_models/table5_pidinet-l.pth --evaluate-converted

table5_pidinet-tiny

 

python main.py --model pidinet_tiny_converted --config carv4 --sa --dil -j 4 --savedir result/table5_pidinet-tiny --datadir custom --dataset Custom --evaluate trained_models/table5_pidinet-tiny.pth --evaluate-converted

table5_pidinet-small

 

python main.py --model pidinet_small_converted --config carv4 --sa --dil -j 4 --savedir result/table5_pidinet-small --datadir custom --dataset Custom --evaluate trained_models/table5_pidinet-small.pth --evaluate-converted
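
A note on inputs and outputs for the Custom runs above: Custom_Loader is constructed with only root=args.datadir (see main.py in section 9), so the custom folder is simply the directory holding the images to process; the file names below are hypothetical. Per the test() code, the resulting edge maps are written to <savedir>/eval_results/imgs_epoch_NNN/ as paired .png and .jpg files.

custom/
    img_0001.png
    img_0002.jpg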

  6. Datasets

  • augmented BSDS500: http://mftp.mmcheng.net/liuyun/rcf/data/HED-BSDS.tar.gz

  • PASCAL VOC: http://mftp.mmcheng.net/liuyun/rcf/data/PASCAL.tar.gz

  • NYUD: http://mftp.mmcheng.net/liuyun/rcf/data/NYUD.tar.gz
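
Each archive is a plain tarball; a typical fetch-and-extract sequence (standard wget/tar usage, shown for the augmented BSDS500 data) looks like:

wget http://mftp.mmcheng.net/liuyun/rcf/data/HED-BSDS.tar.gz
tar -xzf HED-BSDS.tar.gz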

  7. Result images

Pretrained model: table5_pidinet

Pretrained model: table5_pidinet-tiny

  8. Inference speed

Test set: 200 images

Device: CPU

Modified code: the test function in main.py, instrumented with time.perf_counter to time each forward pass:

def test(test_loader, model, epoch, running_file, args):

    from PIL import Image
    import scipy.io as sio
    model.eval()

    if args.ablation:
        img_dir = os.path.join(args.savedir, 'eval_results_val', 'imgs_epoch_%03d' % (epoch - 1))
        mat_dir = os.path.join(args.savedir, 'eval_results_val', 'mats_epoch_%03d' % (epoch - 1))
    else:
        img_dir = os.path.join(args.savedir, 'eval_results', 'imgs_epoch_%03d' % (epoch - 1))
        mat_dir = os.path.join(args.savedir, 'eval_results', 'mats_epoch_%03d' % (epoch - 1))

    eval_info = '\nBegin to eval...\nImg generated in %s\n' % img_dir
    print(eval_info)
    running_file.write('\n%s\n%s\n' % (str(args), eval_info))
    if not os.path.exists(img_dir):
        os.makedirs(img_dir)
    else:
        print('%s already exists' % img_dir)
        #return
    if not os.path.exists(mat_dir):
        os.makedirs(mat_dir)

    total_duration = []
    for idx, (image, img_name) in enumerate(test_loader):
        img_name = img_name[0]
        end = time.perf_counter()  # timing: start of the forward pass
        with torch.no_grad():
            image = image.cuda() if args.use_cuda else image
            _, _, H, W = image.shape
            results = model(image)
            result = torch.squeeze(results[-1]).cpu().numpy()
        tmp_duration = time.perf_counter() - end  # timing: forward time for this image only
        total_duration.append(tmp_duration)
        # print('before sum: total total_duration', total_duration)

        results_all = torch.zeros((len(results), 1, H, W))
        for i in range(len(results)):
            results_all[i, 0, :, :] = results[i]
        torchvision.utils.save_image(1 - results_all, os.path.join(img_dir, "%s.jpg" % img_name))
        sio.savemat(os.path.join(mat_dir, '%s.mat' % img_name), {'img': result})
        result = Image.fromarray((result * 255).astype(np.uint8))
        result.save(os.path.join(img_dir, "%s.png" % img_name))
        runinfo = "Running test [%d/%d]" % (idx + 1, len(test_loader))
        print(runinfo)
        running_file.write('%s\n' % runinfo)

    print('before sum: total total_duration', total_duration)
    total_duration = np.sum(np.array(total_duration))
    print('total total_duration', total_duration)
    print('total dataloader', len(test_loader))
    print("FPS: %f" % (len(test_loader) / total_duration))
    running_file.write('\nDone\n')

  • table5_pidinet: inference time 125.43 s, speed 1.602 fps

  • table5_pidinet-l: inference time 97.616 s, speed 2.059 fps

  • table5_pidinet-tiny: inference time 43.87 s, speed 4.581 fps

  • table5_pidinet-small: inference time 44.673 s, speed 4.499 fps
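
For reference, the script in this section computes speed as FPS = len(test_loader) / total_duration, where total_duration accumulates only the per-image forward time; e.g. 200 images / 125.43 s ≈ 1.6 fps for table5_pidinet.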

  9. main.py

The complete main.py, with the modification from section 4 applied:

""" (Training, Generating edge maps) Pixel Difference Networks for Efficient Edge Detection (accepted as an ICCV 2021 oral) See paper in https://arxiv.org/abs/2108.07009 Author: Zhuo Su, Wenzhe Liu Date: Aug 22, 2020 """ from __future__ import absolute_import from __future__ import unicode_literals from __future__ import print_function from __future__ import division import argparse import os import time import models from models.convert_pidinet import convert_pidinet from utils import * from edge_dataloader import BSDS_VOCLoader, BSDS_Loader, Multicue_Loader, NYUD_Loader, Custom_Loader from torch.utils.data import DataLoader import torch import torchvision import torch.nn as nn import torch.nn.functional as F import torch.backends.cudnn as cudnn import cv2 parser = argparse.ArgumentParser(description='PyTorch Pixel Difference Convolutional Networks') parser.add_argument('--savedir', type=str, default='results/savedir', help='path to save result and checkpoint') parser.add_argument('--datadir', type=str, default='../data', help='dir to the dataset') parser.add_argument('--only-bsds', action='store_true', help='only use bsds for training') parser.add_argument('--ablation', action='store_true', help='not use bsds val set for training') parser.add_argument('--dataset', type=str, default='BSDS', help='data settings for BSDS, Multicue and NYUD datasets') parser.add_argument('--model', type=str, default='baseline', help='model to train the dataset') parser.add_argument('--sa', action='store_true', help='use CSAM in pidinet') parser.add_argument('--dil', action='store_true', help='use CDCM in pidinet') parser.add_argument('--config', type=str, default='carv4', help='model configurations, please refer to models/config.py for possible configurations') parser.add_argument('--seed', type=int, default=None, help='random seed (default: None)') parser.add_argument('--gpu', type=str, default='', help='gpus available') parser.add_argument('--checkinfo', action='store_true', help='only check the informations about the model: model size, flops') parser.add_argument('--epochs', type=int, default=20, help='number of total epochs to run') parser.add_argument('--iter-size', type=int, default=24, help='number of samples in each iteration') parser.add_argument('--lr', type=float, default=0.005, help='initial learning rate for all weights') parser.add_argument('--lr-type', type=str, default='multistep', help='learning rate strategy [cosine, multistep]') parser.add_argument('--lr-steps', type=str, default=None, help='steps for multistep learning rate') parser.add_argument('--opt', type=str, default='adam', help='optimizer') parser.add_argument('--wd', type=float, default=1e-4, help='weight decay for all weights') parser.add_argument('-j', '--workers', type=int, default=4, help='number of data loading workers') parser.add_argument('--eta', type=float, default=0.3, help='threshold to determine the ground truth (the eta parameter in the paper)') parser.add_argument('--lmbda', type=float, default=1.1, help='weight on negative pixels (the beta parameter in the paper)') parser.add_argument('--resume', action='store_true', help='use latest checkpoint if have any') parser.add_argument('--print-freq', type=int, default=10, help='print frequency') parser.add_argument('--save-freq', type=int, default=1, help='save frequency') parser.add_argument('--evaluate', type=str, default=None, help='full path to checkpoint to be evaluated') parser.add_argument('--evaluate-converted', action='store_true', help='convert the checkpoint to 
vanilla cnn, then evaluate') args = parser.parse_args() os.environ["CUDA_VISIBLE_DEVICES"] = args.gpu def main(running_file): global args ### Refine args if args.seed is None: args.seed = int(time.time()) torch.manual_seed(args.seed) torch.cuda.manual_seed_all(args.seed) args.use_cuda = torch.cuda.is_available() if args.lr_steps is not None and not isinstance(args.lr_steps, list): args.lr_steps = list(map(int, args.lr_steps.split('-'))) dataset_setting_choices = ['BSDS', 'NYUD-image', 'NYUD-hha', 'Multicue-boundary-1', 'Multicue-boundary-2', 'Multicue-boundary-3', 'Multicue-edge-1', 'Multicue-edge-2', 'Multicue-edge-3', 'Custom'] if not isinstance(args.dataset, list): assert args.dataset in dataset_setting_choices, 'unrecognized data setting %s, please choose from %s' % (str(args.dataset), str(dataset_setting_choices)) args.dataset = list(args.dataset.strip().split('-')) print(args) ### Create model model = getattr(models, args.model)(args) ### Output its model size, flops and bops if args.checkinfo: count_paramsM = get_model_parm_nums(model) print('Model size: %f MB' % count_paramsM) print('##########Time##########', time.strftime('%Y-%m-%d %H:%M:%S')) return ### Define optimizer conv_weights, bn_weights, relu_weights = model.get_weights() param_groups = [{ 'params': conv_weights, 'weight_decay': args.wd, 'lr': args.lr}, { 'params': bn_weights, 'weight_decay': 0.1 * args.wd, 'lr': args.lr}, { 'params': relu_weights, 'weight_decay': 0.0, 'lr': args.lr }] info = ('conv weights: lr %.6f, wd %.6f' + \ '\tbn weights: lr %.6f, wd %.6f' + \ '\trelu weights: lr %.6f, wd %.6f') % \ (args.lr, args.wd, args.lr, args.wd * 0.1, args.lr, 0.0) print(info) running_file.write('\n%s\n' % info) running_file.flush() if args.opt == 'adam': optimizer = torch.optim.Adam(param_groups, betas=(0.9, 0.99)) elif args.opt == 'sgd': optimizer = torch.optim.SGD(param_groups, momentum=0.9) else: raise TypeError("Please use a correct optimizer in [adam, sgd]") ### Transfer to cuda devices if args.use_cuda: model = torch.nn.DataParallel(model).cuda() print('cuda is used, with %d gpu devices' % torch.cuda.device_count()) else: print('cuda is not used, the running might be slow') #cudnn.benchmark = True ### Load Data if 'BSDS' == args.dataset[0]: if args.only_bsds: train_dataset = BSDS_Loader(root=args.datadir, split="train", threshold=args.eta, ablation=args.ablation) test_dataset = BSDS_Loader(root=args.datadir, split="test", threshold=args.eta) else: train_dataset = BSDS_VOCLoader(root=args.datadir, split="train", threshold=args.eta, ablation=args.ablation) test_dataset = BSDS_VOCLoader(root=args.datadir, split="test", threshold=args.eta) elif 'Multicue' == args.dataset[0]: train_dataset = Multicue_Loader(root=args.datadir, split="train", threshold=args.eta, setting=args.dataset[1:]) test_dataset = Multicue_Loader(root=args.datadir, split="test", threshold=args.eta, setting=args.dataset[1:]) elif 'NYUD' == args.dataset[0]: train_dataset = NYUD_Loader(root=args.datadir, split="train", setting=args.dataset[1:]) test_dataset = NYUD_Loader(root=args.datadir, split="test", setting=args.dataset[1:]) elif 'Custom' == args.dataset[0]: train_dataset = Custom_Loader(root=args.datadir) test_dataset = Custom_Loader(root=args.datadir) else: raise ValueError("unrecognized dataset setting") train_loader = DataLoader( train_dataset, batch_size=1, num_workers=args.workers, shuffle=True) test_loader = DataLoader( test_dataset, batch_size=1, num_workers=args.workers, shuffle=False) ### Create log file log_file = os.path.join(args.savedir, 
'%s_log.txt' % args.model) args.start_epoch = 0 ### Evaluate directly if required if args.evaluate is not None: checkpoint = load_checkpoint(args, running_file) if checkpoint is not None: args.start_epoch = checkpoint['epoch'] + 1 # extra add state_dict = checkpoint['state_dict'] # 去除module前缀 state_dict = {k.replace('module.', ''): v for k, v in state_dict.items()} if args.evaluate_converted: # model.load_state_dict(convert_pidinet(checkpoint['state_dict'], args.config)) model.load_state_dict(convert_pidinet(state_dict, args.config)) else: model.load_state_dict(checkpoint['state_dict']) else: raise ValueError('no checkpoint loaded') test(test_loader, model, args.start_epoch, running_file, args) print('##########Time########## %s' % (time.strftime('%Y-%m-%d %H:%M:%S'))) return ### Optionally resume from a checkpoint if args.resume: checkpoint = load_checkpoint(args, running_file) if checkpoint is not None: args.start_epoch = checkpoint['epoch'] + 1 model.load_state_dict(checkpoint['state_dict']) optimizer.load_state_dict(checkpoint['optimizer']) ### Train saveID = None for epoch in range(args.start_epoch, args.epochs): # adjust learning rate lr_str = adjust_learning_rate(optimizer, epoch, args) # train tr_avg_loss = train( train_loader, model, optimizer, epoch, running_file, args, lr_str) log = "Epoch %03d/%03d: train-loss %s | lr %s | Time %s\n" % \ (epoch, args.epochs, tr_avg_loss, lr_str, time.strftime('%Y-%m-%d %H:%M:%S')) with open(log_file, 'a') as f: f.write(log) saveID = save_checkpoint({ 'epoch': epoch, 'state_dict': model.state_dict(), 'optimizer': optimizer.state_dict(), }, epoch, args.savedir, saveID, keep_freq=args.save_freq) return def train(train_loader, model, optimizer, epoch, running_file, args, running_lr): batch_time = AverageMeter() data_time = AverageMeter() losses = AverageMeter() ## Switch to train mode model.train() running_file.write('\n%s\n' % str(args)) running_file.flush() wD = len(str(len(train_loader)//args.iter_size)) wE = len(str(args.epochs)) end = time.time() iter_step = 0 counter = 0 loss_value = 0 optimizer.zero_grad() for i, (image, label) in enumerate(train_loader): ## Measure data loading time data_time.update(time.time() - end) if args.use_cuda: image = image.cuda(non_blocking=True) label = label.cuda(non_blocking=True) ## Compute output outputs = model(image) if not isinstance(outputs, list): loss = cross_entropy_loss_RCF(outputs, label, args.lmbda) else: loss = 0 for o in outputs: loss += cross_entropy_loss_RCF(o, label, args.lmbda) counter += 1 loss_value += loss.item() loss = loss / args.iter_size loss.backward() if counter == args.iter_size: optimizer.step() optimizer.zero_grad() counter = 0 iter_step += 1 # record loss losses.update(loss_value, args.iter_size) batch_time.update(time.time() - end) end = time.time() loss_value = 0 # display and logging if iter_step % args.print_freq == 1: runinfo = str(('Epoch: [{0:0%dd}/{1:0%dd}][{2:0%dd}/{3:0%dd}]\t' \ % (wE, wE, wD, wD) + \ 'Time {batch_time.val:.3f}\t' + \ 'Data {data_time.val:.3f}\t' + \ 'Loss {loss.val:.4f} (avg:{loss.avg:.4f})\t' + \ 'lr {lr}\t').format( epoch, args.epochs, iter_step, len(train_loader)//args.iter_size, batch_time=batch_time, data_time=data_time, loss=losses, lr=running_lr)) print(runinfo) running_file.write('%s\n' % runinfo) running_file.flush() str_loss = '%.4f' % (losses.avg) return str_loss def test(test_loader, model, epoch, running_file, args): from PIL import Image import scipy.io as sio model.eval() if args.ablation: img_dir = os.path.join(args.savedir, 
'eval_results_val', 'imgs_epoch_%03d' % (epoch - 1)) mat_dir = os.path.join(args.savedir, 'eval_results_val', 'mats_epoch_%03d' % (epoch - 1)) else: img_dir = os.path.join(args.savedir, 'eval_results', 'imgs_epoch_%03d' % (epoch - 1)) mat_dir = os.path.join(args.savedir, 'eval_results', 'mats_epoch_%03d' % (epoch - 1)) eval_info = '\nBegin to eval...\nImg generated in %s\n' % img_dir print(eval_info) running_file.write('\n%s\n%s\n' % (str(args), eval_info)) if not os.path.exists(img_dir): os.makedirs(img_dir) else: print('%s already exits' % img_dir) #return if not os.path.exists(mat_dir): os.makedirs(mat_dir) for idx, (image, img_name) in enumerate(test_loader): img_name = img_name[0] with torch.no_grad(): image = image.cuda() if args.use_cuda else image _, _, H, W = image.shape results = model(image) result = torch.squeeze(results[-1]).cpu().numpy() results_all = torch.zeros((len(results), 1, H, W)) for i in range(len(results)): results_all[i, 0, :, :] = results[i] torchvision.utils.save_image(1-results_all, os.path.join(img_dir, "%s.jpg" % img_name)) sio.savemat(os.path.join(mat_dir, '%s.mat' % img_name), {'img': result}) result = Image.fromarray((result * 255).astype(np.uint8)) result.save(os.path.join(img_dir, "%s.png" % img_name)) runinfo = "Running test [%d/%d]" % (idx + 1, len(test_loader)) print(runinfo) running_file.write('%s\n' % runinfo) running_file.write('\nDone\n') def multiscale_test(test_loader, model, epoch, running_file, args): from PIL import Image import scipy.io as sio model.eval() if args.ablation: img_dir = os.path.join(args.savedir, 'eval_results_val', 'imgs_epoch_%03d_ms' % (epoch - 1)) mat_dir = os.path.join(args.savedir, 'eval_results_val', 'mats_epoch_%03d_ms' % (epoch - 1)) else: img_dir = os.path.join(args.savedir, 'eval_results', 'imgs_epoch_%03d_ms' % (epoch - 1)) mat_dir = os.path.join(args.savedir, 'eval_results', 'mats_epoch_%03d_ms' % (epoch - 1)) eval_info = '\nBegin to eval...\nImg generated in %s\n' % img_dir print(eval_info) running_file.write('\n%s\n%s\n' % (str(args), eval_info)) if not os.path.exists(img_dir): os.makedirs(img_dir) else: print('%s already exits' % img_dir) return if not os.path.exists(mat_dir): os.makedirs(mat_dir) for idx, (image, img_name) in enumerate(test_loader): img_name = img_name[0] image = image[0] image_in = image.numpy().transpose((1,2,0)) scale = [0.5, 1, 1.5] _, H, W = image.shape multi_fuse = np.zeros((H, W), np.float32) with torch.no_grad(): for k in range(0, len(scale)): im_ = cv2.resize(image_in, None, fx=scale[k], fy=scale[k], interpolation=cv2.INTER_LINEAR) im_ = im_.transpose((2,0,1)) results = model(torch.unsqueeze(torch.from_numpy(im_).cuda(), 0)) result = torch.squeeze(results[-1].detach()).cpu().numpy() fuse = cv2.resize(result, (W, H), interpolation=cv2.INTER_LINEAR) multi_fuse += fuse multi_fuse = multi_fuse / len(scale) sio.savemat(os.path.join(mat_dir, '%s.mat' % img_name), {'img': multi_fuse}) result = Image.fromarray((multi_fuse * 255).astype(np.uint8)) result.save(os.path.join(img_dir, "%s.png" % img_name)) runinfo = "Running test [%d/%d]" % (idx + 1, len(test_loader)) print(runinfo) running_file.write('%s\n' % runinfo) running_file.write('\nDone\n') if __name__ == '__main__': os.makedirs(args.savedir, exist_ok=True) running_file = os.path.join(args.savedir, '%s_running-%s.txt' \ % (args.model, time.strftime('%Y-%m-%d-%H-%M-%S'))) with open(running_file, 'w') as f: main(f) print('done')
