Halcon 20.11 Deep Learning Object Detection Model

1. Preface: MVTec's deep learning labeling tool is updated regularly. The Deep Learning Tool that shipped with my 20.11 version had already expired, so I had no choice but to download the latest version, MVTec Deep Learning Tool 24.05. The latest labeling tool is very user-friendly: classification, object detection, segmentation, anomaly detection, and OCR are all included. If you are interested, download it and try it out; labeling, training, and testing can all be done in the tool's UI, and it works very well. If you prefer to implement the training in code yourself, that is also possible: label all the images with the tool first, export the labeled dataset, and then use that dataset to train in HALCON code.
2. On to the good stuff: the deep learning object detection training source code.
Model training: preprocessing and preparation stage ************
dev_update_off ()

* Select the pretrained backbone model.
Backbone := 'pretrained_dl_classifier_compact.hdl'
* Number of classes to detect.
NumClasses := 6
* Image dimensions of the network. Later, these values are
* used to rescale the images during preprocessing.
ImageWidth := 512
ImageHeight := 320
ImageNumChannels := 3

* Set the capacity to 'medium', which is sufficient for this task
* and offers better inference and training speed. Compared to
* 'high', the 'medium' model is more than twice as fast
* while showing almost the same detection performance.
Capacity := 'medium'
*
* Percentages used to split the dataset.
TrainingPercent := 70
ValidationPercent := 15
* In order to get a reproducible split we set a random seed.
* This means that rerunning the script results in the same split of DLDataset.
SeedRand := 42
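With these percentages, split_dl_dataset assigns 70% of the samples to the training split, 15% to validation, and the remaining 15% to test; because the random seed is fixed, rerunning always produces the same assignment. The idea can be sketched in plain Python (an illustration of the concept only, not HALCON's internal implementation; the helper name is made up):

```python
import random

def split_dataset(num_samples, training_percent, validation_percent, seed):
    """Shuffle sample indices with a fixed seed, then cut them into
    train / validation / test parts (test gets the remainder)."""
    indices = list(range(num_samples))
    random.Random(seed).shuffle(indices)  # fixed seed -> same shuffle every run
    n_train = num_samples * training_percent // 100
    n_val = num_samples * validation_percent // 100
    return (indices[:n_train],
            indices[n_train:n_train + n_val],
            indices[n_train + n_val:])

train, val, test = split_dataset(100, 70, 15, seed=42)
```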

* *** Set input and output paths ***
*
* All example data is written to this folder.
ExampleDataDir := 'E:/2024-4-10'
* Path to the image directory.
HalconImageDir := ExampleDataDir + '/images/'
* Write the initialized DL object detection model to train it in example part 2.
DLModelFileName := ExampleDataDir + '/pretrained_dl_model_detection.hdl'
* Dataset directory for any outputs written by preprocess_dl_dataset.
DataDirectory := ExampleDataDir + '/dldataset_pcb_' + ImageWidth + 'x' + ImageHeight
* Store the preprocessing parameters separately in order to use them, e.g., during inference.
PreprocessParamFileName := DataDirectory + '/dl_preprocess_param.hdict'
* Path to the labeled dataset.
DataDirectPath := ExampleDataDir + '/Pcb.hdict'
* Output path of the best evaluated model.
BestModelBaseName := ExampleDataDir + '/best_dl_model_detection'
* Output path for the final trained model.
FinalModelBaseName := ExampleDataDir + '/final_dl_model_detection'
InitialModelFileName := ExampleDataDir + '/pretrained_dl_model_detection.hdl'
DLDatasetFileName := DataDirectory + '/dl_dataset.hdict'


* *** Read the labeled dataset and split the dataset ***
*
* In order to get reproducible results we set a seed here.
set_system ('seed_rand', SeedRand)
* Create the output directory if it does not exist yet.
file_exists (ExampleDataDir, FileExists)
if (not FileExists)
    make_dir (ExampleDataDir)
endif
* Read in a DLDataset.
read_dict (DataDirectPath, [], [], DLDataset)
* Split the dataset into train/validation and test.
split_dl_dataset (DLDataset, TrainingPercent, ValidationPercent, [])

* *** Determine model parameters from data ***
*
* Generate the model parameters min_level, max_level, anchor_num_subscales,
* and anchor_aspect_ratios from the dataset in order to improve the
* training result. Please note that optimizing the model parameters too
* much on the training data can lead to overfitting. Hence, this should
* only be done if the actual application data are similar to the training
* data.
create_dict (GenParam)
set_dict_tuple (GenParam, 'split', 'train')
determine_dl_model_detection_param (DLDataset, ImageWidth, ImageHeight, GenParam, DLDetectionModelParam)
*
* Get the generated model parameters.
get_dict_tuple (DLDetectionModelParam, 'min_level', MinLevel)
get_dict_tuple (DLDetectionModelParam, 'max_level', MaxLevel)
get_dict_tuple (DLDetectionModelParam, 'anchor_num_subscales', AnchorNumSubscales)
get_dict_tuple (DLDetectionModelParam, 'anchor_aspect_ratios', AnchorAspectRatios)

* *** Create the object detection model ***
*
* Create a dictionary for the generic parameters and create the object detection model.
create_dict (DLModelDetectionParam)
set_dict_tuple (DLModelDetectionParam, 'image_width', ImageWidth)
set_dict_tuple (DLModelDetectionParam, 'image_height', ImageHeight)
set_dict_tuple (DLModelDetectionParam, 'image_num_channels', ImageNumChannels)
set_dict_tuple (DLModelDetectionParam, 'min_level', MinLevel)
set_dict_tuple (DLModelDetectionParam, 'max_level', MaxLevel)
set_dict_tuple (DLModelDetectionParam, 'anchor_num_subscales', AnchorNumSubscales)
set_dict_tuple (DLModelDetectionParam, 'anchor_aspect_ratios', AnchorAspectRatios)
set_dict_tuple (DLModelDetectionParam, 'capacity', Capacity)
* Get the class IDs from the dataset for the model.
get_dict_tuple (DLDataset, 'class_ids', ClassIDs)
set_dict_tuple (DLModelDetectionParam, 'class_ids', ClassIDs)
* Create the model.
create_dl_model_detection (Backbone, NumClasses, DLModelDetectionParam, DLModelHandle)
* Write the initialized DL object detection model
* to train it later in part 2.
write_dl_model (DLModelHandle, DLModelFileName)

* *** Preprocess the dataset ***
*
* Get the preprocessing parameters from the model.
create_dl_preprocess_param_from_model (DLModelHandle, 'none', 'full_domain', [], [], [], DLPreprocessParam)
* Preprocess the dataset. This might take a few minutes.
create_dict (GenParam)
set_dict_tuple (GenParam, 'overwrite_files', true)
preprocess_dl_dataset (DLDataset, DataDirectory, DLPreprocessParam, GenParam, DLDatasetFileName)
* Write the preprocessing parameters in order to use them in later parts.
write_dict (DLPreprocessParam, PreprocessParamFileName, [], [])

* *** Preview the preprocessed dataset ***
*
* Before moving on to training, it is recommended to check the preprocessed dataset.
* Display the DLSamples for 6 randomly selected train images.
get_dict_tuple (DLDataset, 'samples', DatasetSamples)
find_dl_samples (DatasetSamples, 'split', 'train', 'match', SampleIndices)
tuple_shuffle (SampleIndices, ShuffledIndices)
* Select 6 random samples via the indices [0:5].
read_dl_samples (DLDataset, ShuffledIndices[0:5], DLSampleBatchDisplay)
* Set the parameters for dev_display_dl_data.
create_dict (WindowHandleDict)
create_dict (GenParam)
set_dict_tuple (GenParam, 'scale_windows', 1.2)
* Display the samples in DLSampleBatchDisplay.
for Index := 0 to |DLSampleBatchDisplay| - 1 by 1
    * Loop over the samples in DLSampleBatchDisplay.
    dev_display_dl_data (DLSampleBatchDisplay[Index], [], DLDataset, 'bbox_ground_truth', GenParam, WindowHandleDict)
    get_dict_tuple (WindowHandleDict, 'bbox_ground_truth', WindowHandles)
    * Add explanatory text.
    dev_set_window (WindowHandles[0])
    get_dict_object (Image, DLSampleBatchDisplay[Index], 'image')
    get_image_size (Image, ImageWidth, ImageHeight)
    dev_disp_text ('New image size after preprocessing: ' + ImageWidth + ' x ' + ImageHeight, 'window', 'bottom', 'right', 'black', [], [])
    dev_set_window (WindowHandles[1])
    dev_disp_text ('Press Run (F5) to continue', 'window', 'bottom', 'right', 'black', [], [])
    stop ()
endfor

Model training stage ***************************************************
stop()
dev_update_off ()
*

* Training can be performed on a GPU or CPU.
* See the respective system requirements in the Installation Guide.
* If possible a GPU is used in this example.
* In case you explicitly wish to run this example on the CPU,
* choose the CPU device instead.
query_available_dl_devices (['runtime','runtime'], ['gpu','cpu'], DLDeviceHandles)
if (|DLDeviceHandles| == 0)
    throw ('No supported device found to continue this example.')
endif
* Due to the filter used in query_available_dl_devices, the first device is a GPU, if available.
DLDevice := DLDeviceHandles[0]
get_dl_device_param (DLDevice, 'type', DLDeviceType)
if (DLDeviceType == 'cpu')
    * The number of used threads may have an impact
    * on the training duration.
    NumThreadsTraining := 4
    set_system ('thread_num', NumThreadsTraining)
endif

* *** Set basic parameters. ***
*
* The following parameters need to be adapted frequently.
* Model parameters.
* Batch size.
BatchSize := 2
* Initial learning rate.
InitialLearningRate := 0.0005
* Momentum should be high if the batch size is small.
Momentum := 0.99
* Parameters used by train_dl_model.
* Number of epochs to train the model.
NumEpochs := 60
* Evaluation interval (in epochs) to calculate evaluation measures on the validation split.
EvaluationIntervalEpochs := 1
* Change the learning rate in the following epochs, e.g. [15, 30].
* Set it to [] if the learning rate should not be changed.
ChangeLearningRateEpochs := 30
* Change the learning rate to the following values, e.g. InitialLearningRate * [0.1, 0.01].
* The tuple has to be of the same length as ChangeLearningRateEpochs.
ChangeLearningRateValues := InitialLearningRate * 0.1
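With these settings the learning rate starts at 0.0005 and is multiplied by 0.1 once epoch 30 is reached. A quick Python sketch of this piecewise-constant schedule (illustration only, not HALCON code):

```python
def learning_rate(epoch, initial_lr=0.0005, change_epochs=(30,), factors=(0.1,)):
    """Piecewise-constant schedule: start at initial_lr and multiply it by the
    corresponding factor once each change epoch has been reached."""
    lr = initial_lr
    for change_epoch, factor in zip(change_epochs, factors):
        if epoch >= change_epoch:
            lr = initial_lr * factor
    return lr
```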

* *** Set advanced parameters. ***
*
* The following parameters might need to be changed in rare cases.
* Model parameter.
* Set the weight prior.
WeightPrior := 0.00001
* Parameters used by train_dl_model.
* Control whether the training progress is displayed (true/false).
EnableDisplay := true
* Set a random seed for training.
RandomSeed := 42
set_system ('seed_rand', RandomSeed)
* In order to obtain nearly deterministic training results on the same GPU
* (system, driver, cuda-version) you could specify 'cudnn_deterministic' as
* 'true'. Note that this could slow down training a bit.
* set_system ('cudnn_deterministic', 'true')
*
* Set the generic parameters of create_dl_train_param.
* Please see the documentation of create_dl_train_param for an overview of all available parameters.
GenParamName := []
GenParamValue := []
*
* Augmentation parameters.
* If samples should be augmented during training, create the dict required by augment_dl_samples.
* Here, we set the augmentation percentage and method.
create_dict (AugmentationParam)
* Percentage of samples to be augmented.
set_dict_tuple (AugmentationParam, 'augmentation_percentage', 50)
* Mirror images along row and column.
set_dict_tuple (AugmentationParam, 'mirror', 'rc')
GenParamName := [GenParamName,'augment']
GenParamValue := [GenParamValue,AugmentationParam]

* Change strategies.
* It is possible to change model parameters during training.
* Here, we change the learning rate if specified above.
if (|ChangeLearningRateEpochs| > 0)
    create_dict (ChangeStrategy)
    * Specify the model parameter to be changed, here the learning rate.
    set_dict_tuple (ChangeStrategy, 'model_param', 'learning_rate')
    * Start the parameter value at 'initial_value'.
    set_dict_tuple (ChangeStrategy, 'initial_value', InitialLearningRate)
    * Reduce the learning rate in the following epochs.
    set_dict_tuple (ChangeStrategy, 'epochs', ChangeLearningRateEpochs)
    * Reduce the learning rate to the following value at epoch 30.
    set_dict_tuple (ChangeStrategy, 'values', ChangeLearningRateValues)
    * Collect all change strategies as input.
    GenParamName := [GenParamName,'change']
    GenParamValue := [GenParamValue,ChangeStrategy]
endif
*
* Serialization strategies.
* There are several options for saving intermediate models to disk (see create_dl_train_param).
* Here, the best and the final model are saved to the paths set above.
create_dict (SerializationStrategy)
set_dict_tuple (SerializationStrategy, 'type', 'best')
set_dict_tuple (SerializationStrategy, 'basename', BestModelBaseName)
GenParamName := [GenParamName,'serialize']
GenParamValue := [GenParamValue,SerializationStrategy]
create_dict (SerializationStrategy)
set_dict_tuple (SerializationStrategy, 'type', 'final')
set_dict_tuple (SerializationStrategy, 'basename', FinalModelBaseName)
GenParamName := [GenParamName,'serialize']
GenParamValue := [GenParamValue,SerializationStrategy]
*
* Display parameters.
* In this example, the evaluation measure for the training split is not displayed during
* training (default). If you want to do so, select a certain percentage of the training
* samples used to evaluate the model during training. A lower percentage helps to speed
* up the evaluation. If the evaluation measure for the training split shall
* not be displayed, set SelectedPercentageTrainSamples to 0.
SelectedPercentageTrainSamples := 0
* Set the x-axis argument of the training plots.
XAxisLabel := 'epochs'
create_dict (DisplayParam)
set_dict_tuple (DisplayParam, 'selected_percentage_train_samples', SelectedPercentageTrainSamples)
set_dict_tuple (DisplayParam, 'x_axis_label', XAxisLabel)
GenParamName := [GenParamName,'display']
GenParamValue := [GenParamValue,DisplayParam]


* *** Read the initial model and the dataset. ***
*
* Check if all necessary files exist.
check_data_availability (ExampleDataDir, InitialModelFileName, DLDatasetFileName)
* Read in the model that was initialized during preprocessing.
read_dl_model (InitialModelFileName, DLModelHandle)
* Read in the preprocessed DLDataset file.
read_dict (DLDatasetFileName, [], [], DLDataset)

* *** Set model parameters. ***
*
* Set the model hyperparameters as specified in the settings above.
set_dl_model_param (DLModelHandle, 'learning_rate', InitialLearningRate)
set_dl_model_param (DLModelHandle, 'momentum', Momentum)
if (BatchSize == 'maximum' and DLDeviceType == 'gpu')
    set_dl_model_param_max_gpu_batch_size (DLModelHandle, 100)
else
    set_dl_model_param (DLModelHandle, 'batch_size', BatchSize)
endif
if (|WeightPrior| > 0)
    set_dl_model_param (DLModelHandle, 'weight_prior', WeightPrior)
endif
* When the batch size is determined, set the device.
set_dl_model_param (DLModelHandle, 'device', DLDevice)

* *** Train the model. ***
*
* Create the training parameters.
create_dl_train_param (DLModelHandle, NumEpochs, EvaluationIntervalEpochs, EnableDisplay, RandomSeed, GenParamName, GenParamValue, TrainParam)
* Start the training by calling the training operator
* train_dl_model_batch () within the following procedure.
train_dl_model (DLDataset, DLModelHandle, TrainParam, 0.0, TrainResults, TrainInfos, EvaluationInfos)
* Stop after the training has finished, before closing the windows.
dev_disp_text ('Press Run (F5) to continue', 'window', 'bottom', 'right', 'black', [], [])
stop ()
* Close the training windows.
dev_close_window ()

Model evaluation stage ***********************************
dev_update_off ()
*

* In this example, the evaluation steps are explained in graphics windows
* before they are executed. Set the following parameter to false in order to
* skip this visualization.
ShowExampleScreens := true
* By default, this example uses a model pretrained by MVTec. To use the model
* which was trained in part 2 of this example series, set the following
* variable to false.
UsePretrainedModel := true
* The evaluation can be performed on a GPU or CPU.
* See the respective system requirements in the Installation Guide.
* If possible a GPU is used in this example.
* In case you explicitly wish to run this example on the CPU,
* choose the CPU device instead.
query_available_dl_devices (['runtime','runtime'], ['gpu','cpu'], DLDeviceHandles)
if (|DLDeviceHandles| == 0)
    throw ('No supported device found to continue this example.')
endif
* Due to the filter used in query_available_dl_devices, the first device is a GPU, if available.
DLDevice := DLDeviceHandles[0]

* *** Set paths ***
*
* Example data folder containing the outputs of the previous example series.
ExampleDataDir := 'E:/2024-4-10'
* File name of the finetuned object detection model.
RetrainedModelFileName := ExampleDataDir + '/best_dl_model_detection.hdl'
* Path to the DL dataset.
* Note: Adapt DataDirectory after preprocessing with another image size.
DataDirectory := ExampleDataDir + '/dldataset_pcb_512x320'
DLDatasetFileName := DataDirectory + '/dl_dataset.hdict'

* *** Set evaluation parameters ***
*
* Specify the measures of interest.
EvaluationMeasures := 'all'
* Specify the considered IoU thresholds.
IoUThresholds := []
* Display detailed results for the following IoU threshold.
DisplayIoUThreshold := 0.7
* Batch size used during evaluation.
BatchSize := 1
* Specify evaluation subsets for objects of a certain size.
AreaNames := []
AreaMin := []
AreaMax := []
* Specify the maximum number of detections considered for each measure.
MaxNumDetections := []
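The IoU (intersection over union) threshold decides when a detected box counts as a match for a ground-truth box; with DisplayIoUThreshold := 0.7, a detection needs an overlap-over-union of at least 0.7 to count as correct. For reference, IoU for axis-aligned boxes can be computed like this (a Python sketch, with boxes given as (row1, col1, row2, col2)):

```python
def iou(box_a, box_b):
    """Intersection over union of two axis-aligned boxes (row1, col1, row2, col2)."""
    r1 = max(box_a[0], box_b[0])
    c1 = max(box_a[1], box_b[1])
    r2 = min(box_a[2], box_b[2])
    c2 = min(box_a[3], box_b[3])
    inter = max(0, r2 - r1) * max(0, c2 - c1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter)
```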

* *** Read the model and the data ***
*
* Check if all necessary files exist.
check_data_availability_COPY_1 (ExampleDataDir, DLDatasetFileName, RetrainedModelFileName, UsePretrainedModel)
* Read the trained model.
read_dl_model (RetrainedModelFileName, DLModelHandle)
* Set the batch size of the model to 1 temporarily.
set_dl_model_param (DLModelHandle, 'batch_size', 1)
set_dl_model_param (DLModelHandle, 'device', DLDevice)
*
* Read the evaluation data.
read_dict (DLDatasetFileName, [], [], DLDataset)

* *** Set optimized parameters for inference ***
*
* To reduce the number of false positives, set lower values for
* 'max_overlap' (default = 0.5) and 'max_overlap_class_agnostic'
* (default = 1.0) and a higher confidence threshold (default = 0.5).
set_dl_model_param (DLModelHandle, 'max_overlap_class_agnostic', 0.7)
set_dl_model_param (DLModelHandle, 'max_overlap', 0.2)
set_dl_model_param (DLModelHandle, 'min_confidence', 0.6)
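To see why these values reduce false positives: detections below min_confidence are discarded, and when two same-class boxes overlap by more than max_overlap (measured as IoU), only the more confident one survives (non-maximum suppression). A simplified Python sketch of this kind of postprocessing (illustration only; it models just the per-class max_overlap, not max_overlap_class_agnostic, and is not HALCON's actual implementation):

```python
def filter_detections(dets, min_confidence=0.6, max_overlap=0.2):
    """Greedy NMS sketch: drop low-confidence boxes, then keep boxes
    best-first, discarding any box whose IoU with an already kept box of the
    same class exceeds max_overlap.
    Each detection is (row1, col1, row2, col2, class_id, confidence)."""
    def iou(a, b):
        r1, c1 = max(a[0], b[0]), max(a[1], b[1])
        r2, c2 = min(a[2], b[2]), min(a[3], b[3])
        inter = max(0, r2 - r1) * max(0, c2 - c1)
        area = lambda x: (x[2] - x[0]) * (x[3] - x[1])
        return inter / (area(a) + area(b) - inter)
    kept = []
    for det in sorted(dets, key=lambda d: d[5], reverse=True):
        if det[5] < min_confidence:
            continue  # below the confidence threshold
        if all(k[4] != det[4] or iou(k, det) <= max_overlap for k in kept):
            kept.append(det)
    return kept
```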

* *** First impression via visual inspection of the results ***
*
* Create the parameter dictionaries for visualization.
create_dict (WindowHandleDict)
create_dict (GenParam)
set_dict_tuple (GenParam, 'bbox_display_confidence', false)
* Select test images randomly.
get_dict_tuple (DLDataset, 'samples', DatasetSamples)
find_dl_samples (DatasetSamples, 'split', 'test', 'or', DLSampleIndices)
tuple_shuffle (DLSampleIndices, DLSampleIndicesShuffled)
* Apply the model and display the results.
for Index := 0 to 5 by 1
    read_dl_samples (DLDataset, DLSampleIndicesShuffled[Index], DLSampleBatch)
    apply_dl_model (DLModelHandle, DLSampleBatch, [], DLResultBatch)
    dev_display_dl_data (DLSampleBatch, DLResultBatch, DLDataset, 'bbox_both', GenParam, WindowHandleDict)
    dev_disp_text ('Press Run (F5) to continue', 'window', 'bottom', 'right', 'black', [], [])
    stop ()
endfor
dev_close_window_dict (WindowHandleDict)
set_dl_model_param (DLModelHandle, 'batch_size', BatchSize)
*


* *** Evaluate the object detection model on the evaluation data ***
*
* Set the generic evaluation parameters.
create_dict (GenParamEval)
* Set the measures of interest.
set_dict_tuple (GenParamEval, 'measures', EvaluationMeasures)
* Set the maximum number of detections considered for each measure.
if (|MaxNumDetections|)
    set_dict_tuple (GenParamEval, 'max_num_detections', MaxNumDetections)
endif
* Set the evaluation area subsets.
if (|AreaNames|)
    if ((|AreaNames| != |AreaMin|) or (|AreaNames| != |AreaMax|))
        throw ('AreaNames, AreaMin, and AreaMax must have the same size.')
    endif
    create_dict (AreaRanges)
    set_dict_tuple (AreaRanges, 'name', AreaNames)
    set_dict_tuple (AreaRanges, 'min', AreaMin)
    set_dict_tuple (AreaRanges, 'max', AreaMax)
    set_dict_tuple (GenParamEval, 'area_ranges', AreaRanges)
endif
* Set the IoU thresholds.
if (|IoUThresholds|)
    set_dict_tuple (GenParamEval, 'iou_threshold', IoUThresholds)
endif
* Enable detailed evaluation.
set_dict_tuple (GenParamEval, 'detailed_evaluation', true)
* Show the progress of the evaluation.
set_dict_tuple (GenParamEval, 'show_progress', true)
* Evaluate the finetuned model on the 'test' split of the dataset.
evaluate_dl_model (DLDataset, DLModelHandle, 'split', 'test', GenParamEval, EvaluationResultDetection, EvalParams)
* Display the results of the detailed evaluation.
create_dict (DisplayParam)
* Set the IoU of interest. The default is the first 'iou_threshold' of EvalParams.
if (|DisplayIoUThreshold| == 1)
    get_dict_tuple (EvalParams, 'iou_threshold', EvalIoUThresholds)
    if (find(EvalIoUThresholds,DisplayIoUThreshold) != -1)
        set_dict_tuple (DisplayParam, 'iou_threshold', DisplayIoUThreshold)
    else
        throw ('No evaluation result for the specified IoU threshold.')
    endif
endif
* Display detailed precision and recall.
set_dict_tuple (DisplayParam, 'display_mode', ['pie_charts_precision','pie_charts_recall'])
create_dict (WindowHandleDict)
dev_display_detection_detailed_evaluation (EvaluationResultDetection, EvalParams, DisplayParam, WindowHandleDict)
dev_disp_text ('Press Run (F5) to continue', 'window', 'top', 'right', 'black', [], [])
stop ()
dev_close_window_dict (WindowHandleDict)
* Display the confusion matrix.
set_dict_tuple (DisplayParam, 'display_mode', 'absolute_confusion_matrix')
dev_display_detection_detailed_evaluation (EvaluationResultDetection, EvalParams, DisplayParam, WindowHandleDict)
dev_disp_text ('Press Run (F5) to continue', 'window', 'bottom', 'right', 'black', [], [])
stop ()
dev_close_window_dict (WindowHandleDict)
* Optimize the memory consumption.
set_dl_model_param (DLModelHandle, 'batch_size', 1)
set_dl_model_param (DLModelHandle, 'optimize_for_inference', 'true')
write_dl_model (DLModelHandle, RetrainedModelFileName)
* Close the windows.
dev_close_window_dict (WindowHandleDict)

Model testing (inference) stage ****************************************
dev_update_off ()
*

* In this example, the inference steps are explained in graphics windows
* before they are executed. Set the following parameter to false in order to
* skip this visualization.
ShowExampleScreens := true
* By default, this example uses a model pretrained by MVTec. To use the model
* which was trained in part 2 of this example series, set the following
* variable to false.
UsePretrainedModel := true
* Inference can be done on a GPU or CPU.
* See the respective system requirements in the Installation Guide.
* If possible a GPU is used in this example.
* In case you explicitly wish to run this example on the CPU,
* choose the CPU device instead.
query_available_dl_devices (['runtime','runtime'], ['gpu','cpu'], DLDeviceHandles)
if (|DLDeviceHandles| == 0)
    throw ('No supported device found to continue this example.')
endif
* Due to the filter used in query_available_dl_devices, the first device is a GPU, if available.
DLDevice := DLDeviceHandles[0]

* *** Set paths and parameters for inference ***
*
* We will demonstrate the inference on the example images.
* In a real application, newly incoming images (not used for training or evaluation)
* would be used here.
* In this example, we read the images from file.
* Directory with the images of the PCB dataset.
ExampleDir := 'E:/2024-4-10'
ImageDir := ExampleDir + '/PcbImgs'
* File name of the dict containing the parameters used for preprocessing.
* Note: Adapt DataDirectory after preprocessing with another image size.
DataDirectory := ExampleDir + '/dldataset_pcb_512x320'
PreprocessParamFileName := DataDirectory + '/dl_preprocess_param.hdict'
* File name of the finetuned object detection model.
RetrainedModelFileName := ExampleDir + '/best_dl_model_detection.hdl'
* Provide the class names and IDs.
* Class names.
ClassNames := ['SR','MR','BR','C','D','U']
* Respective class IDs.
ClassIDs := [0,1,2,3,4,5]
* Batch size used during inference.
BatchSizeInference := 1
* Postprocessing parameters for the detection model.
MinConfidence := 0.6
MaxOverlap := 0.2
MaxOverlapClassAgnostic := 0.7


* *** Inference ***
*
* Check if all necessary files exist.
check_data_availability_COPY_2 (ExampleDir, PreprocessParamFileName, RetrainedModelFileName, UsePretrainedModel)
* Read in the retrained model.
read_dl_model (RetrainedModelFileName, DLModelHandle)
* Set the batch size.
set_dl_model_param (DLModelHandle, 'batch_size', BatchSizeInference)
* Initialize the model for inference.
set_dl_model_param (DLModelHandle, 'device', DLDevice)
* Set the postprocessing parameters for the model.
set_dl_model_param (DLModelHandle, 'min_confidence', MinConfidence)
set_dl_model_param (DLModelHandle, 'max_overlap', MaxOverlap)
set_dl_model_param (DLModelHandle, 'max_overlap_class_agnostic', MaxOverlapClassAgnostic)
* Get the parameters used for preprocessing.
read_dict (PreprocessParamFileName, [], [], DLPreprocessParam)
* Create the window dictionary for displaying results.
create_dict (WindowHandleDict)
* Create a dictionary with the dataset parameters necessary for displaying.
create_dict (DLDataInfo)
set_dict_tuple (DLDataInfo, 'class_names', ClassNames)
set_dict_tuple (DLDataInfo, 'class_ids', ClassIDs)
* Set the generic parameters for visualization.
create_dict (GenParam)
set_dict_tuple (GenParam, 'scale_windows', 1.2)
* Image Acquisition 01: Code generated by Image Acquisition 01.
list_files ('E:/2024-4-10/PcbImgs', ['files','follow_links'], ImageFiles)
tuple_regexp_select (ImageFiles, ['\\.(tif|tiff|gif|bmp|jpg|jpeg|jp2|png|pcx|pgm|ppm|pbm|xwd|ima|hobj)$','ignore_case'], ImageFiles)
for Index := 0 to |ImageFiles| - 1 by 1
    read_image (ImageBatch, ImageFiles[Index])
    * Generate the DLSampleBatch.
    gen_dl_samples_from_images (ImageBatch, DLSampleBatch)
    * Preprocess the DLSampleBatch.
    preprocess_dl_samples (DLSampleBatch, DLPreprocessParam)
    * Apply the DL model on the DLSampleBatch.
    apply_dl_model (DLModelHandle, DLSampleBatch, [], DLResultBatch)
    * Postprocessing and visualization.
    * Loop over each sample in the batch.
    for SampleIndex := 0 to BatchSizeInference - 1 by 1
        * Get the sample and the corresponding results.
        DLSample := DLSampleBatch[SampleIndex]
        DLResult := DLResultBatch[SampleIndex]
        * Count the detected objects for each class. Use a separate loop
        * variable so the outer loop variable Index is not overwritten.
        get_dict_tuple (DLResult, 'bbox_class_id', DetectedClassIDs)
        tuple_gen_const (|ClassIDs|, 0, NumberDetectionsPerClass)
        for ClassIndex := 0 to |ClassIDs| - 1 by 1
            NumberDetectionsPerClass[ClassIndex] := sum(DetectedClassIDs [==] ClassIDs[ClassIndex])
        endfor
        * Create the output text based on the counted objects.
        create_counting_result_text (NumberDetectionsPerClass, ClassNames, Text, TextColor, TextBoxColor)
        * Display the results and the text.
        dev_display_dl_data (DLSample, DLResult, DLDataInfo, 'bbox_result', GenParam, WindowHandleDict)
        get_dict_tuple (WindowHandleDict, 'bbox_result', WindowHandles)
        dev_set_window (WindowHandles[0])
        set_display_font (WindowHandles[0], 16, 'mono', 'true', 'false')
        dev_disp_text (Text, 'window', 'top', 'left', TextColor, ['box_color','shadow'], [TextBoxColor,'false'])
        dev_disp_text ('Press Run (F5) to continue', 'window', 'bottom', 'right', 'black', [], [])
        stop ()
    endfor
endfor
* Close the windows used for visualization.
dev_close_window_dict (WindowHandleDict)
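The per-class counting in the loop above relies on HDevelop's element-wise tuple operator [==]: sum(DetectedClassIDs [==] ClassIDs[ClassIndex]) counts how many detected IDs equal the given class ID. The equivalent in plain Python (illustration only, not HALCON code):

```python
def count_per_class(detected_class_ids, class_ids):
    """For each class ID, count how many detections were assigned to it,
    mirroring sum(DetectedClassIDs [==] ClassIDs[ClassIndex]) in HDevelop."""
    return [sum(1 for d in detected_class_ids if d == c) for c in class_ids]
```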

