HALCON 20.11 Deep Learning Object Detection Model

1. Preface: MVTec's Deep Learning Tool for labeling keeps being updated, and the version matching my HALCON 20.11 installation already reports itself as expired, so I had no choice but to download the latest MVTec Deep Learning Tool 24.05. The new labeling tool is very user-friendly: classification, object detection, segmentation, anomaly detection, and OCR are all supported. If you are interested, download it and try it; labeling, training, and testing can all be done in the tool's UI, and it works very well. If you prefer to implement the training in code yourself, that is also possible: label all of your images in the tool first, export the labeled dataset, and then use that dataset to train in HALCON code.
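Before running the training script, it can help to open the exported .hdict once and confirm that the class names, class IDs, and sample count are what you expect. A minimal sketch; it assumes the same dataset path used in the training code below, and 'class_names', 'class_ids', and 'samples' are the standard keys of a HALCON DLDataset dictionary:

* Quick sanity check of the dataset exported by the Deep Learning Tool.
read_dict ('E:/2024-4-10/Pcb.hdict', [], [], DLDatasetCheck)
get_dict_tuple (DLDatasetCheck, 'class_names', ClassNamesCheck)
get_dict_tuple (DLDatasetCheck, 'class_ids', ClassIDsCheck)
get_dict_tuple (DLDatasetCheck, 'samples', SamplesCheck)
* Number of labeled samples contained in the export.
NumSamplesCheck := |SamplesCheck|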
2. Now for the good stuff, the full training source code for deep learning object detection:
Preprocessing and preparation stage for model training ************
dev_update_off ()

* Choose the pretrained backbone network.
Backbone := 'pretrained_dl_classifier_compact.hdl'
* Number of classes to be detected.
NumClasses := 6
* Image dimensions of the network. Later, these values are
* used to rescale the images during preprocessing.
ImageWidth := 512
ImageHeight := 320
ImageNumChannels := 3

* Set the capacity to 'medium', which is sufficient for this task
* and provides better inference and training speed. Compared to
* 'high', the 'medium' model is more than twice as fast while
* showing almost the same detection performance.
Capacity := 'medium'
*

* Percentages used to split the dataset.
TrainingPercent := 70
ValidationPercent := 15
* In order to get a reproducible split we set a random seed.
* This means that rerunning the script results in the same split of DLDataset.
SeedRand := 42

* *** Set input and output paths ***
*
* All example data is written to this folder.
ExampleDataDir := 'E:/2024-4-10'
* Path to the image directory.
HalconImageDir := ExampleDataDir + '/images/'
* Write the initialized DL object detection model to train it in example part 2.
DLModelFileName := ExampleDataDir + '/pretrained_dl_model_detection.hdl'
* Dataset directory for any outputs written by preprocess_dl_dataset.
DataDirectory := ExampleDataDir + '/dldataset_pcb_' + ImageWidth + 'x' + ImageHeight
* Store the preprocess parameters separately in order to use them e.g. during inference.
PreprocessParamFileName := DataDirectory + '/dl_preprocess_param.hdict'
* Path to the labeled dataset exported by the Deep Learning Tool.
DataDirectPath := ExampleDataDir + '/Pcb.hdict'
* Output path of the best evaluated model.
BestModelBaseName := ExampleDataDir + '/best_dl_model_detection'
* Output path for the final trained model.
FinalModelBaseName := ExampleDataDir + '/final_dl_model_detection'
InitialModelFileName := ExampleDataDir + '/pretrained_dl_model_detection.hdl'
DLDatasetFileName := DataDirectory + '/dl_dataset.hdict'


* *** Read the labeled dataset and split the dataset ***
*
* In order to get reproducible results we set a seed here.
set_system ('seed_rand', SeedRand)
* Create the output directory if it does not exist yet.
file_exists (ExampleDataDir, FileExists)
if (not FileExists)
    make_dir (ExampleDataDir)
endif
* Read in the DLDataset.
read_dict (DataDirectPath, [], [], DLDataset)
* Split the dataset into train/validation and test.
split_dl_dataset (DLDataset, TrainingPercent, ValidationPercent, [])

* *** Determine model parameters from the data ***
*
* Generate the model parameters min_level, max_level, anchor_num_subscales,
* and anchor_aspect_ratios from the dataset in order to improve the
* training result. Please note that optimizing the model parameters too
* much on the training data can lead to overfitting. Hence, this should
* only be done if the actual application data are similar to the training
* data.
create_dict (GenParam)
set_dict_tuple (GenParam, 'split', 'train')
determine_dl_model_detection_param (DLDataset, ImageWidth, ImageHeight, GenParam, DLDetectionModelParam)
*
* Get the generated model parameters.
get_dict_tuple (DLDetectionModelParam, 'min_level', MinLevel)
get_dict_tuple (DLDetectionModelParam, 'max_level', MaxLevel)
get_dict_tuple (DLDetectionModelParam, 'anchor_num_subscales', AnchorNumSubscales)
get_dict_tuple (DLDetectionModelParam, 'anchor_aspect_ratios', AnchorAspectRatios)

* *** Create the object detection model ***
*
* Create a dictionary for the generic parameters and create the object detection model.
create_dict (DLModelDetectionParam)
set_dict_tuple (DLModelDetectionParam, 'image_width', ImageWidth)
set_dict_tuple (DLModelDetectionParam, 'image_height', ImageHeight)
set_dict_tuple (DLModelDetectionParam, 'image_num_channels', ImageNumChannels)
set_dict_tuple (DLModelDetectionParam, 'min_level', MinLevel)
set_dict_tuple (DLModelDetectionParam, 'max_level', MaxLevel)
set_dict_tuple (DLModelDetectionParam, 'anchor_num_subscales', AnchorNumSubscales)
set_dict_tuple (DLModelDetectionParam, 'anchor_aspect_ratios', AnchorAspectRatios)
set_dict_tuple (DLModelDetectionParam, 'capacity', Capacity)
* Get the class IDs from the dataset for the model.
get_dict_tuple (DLDataset, 'class_ids', ClassIDs)
set_dict_tuple (DLModelDetectionParam, 'class_ids', ClassIDs)
* Create the model.
create_dl_model_detection (Backbone, NumClasses, DLModelDetectionParam, DLModelHandle)
* Write the initialized DL object detection model
* to train it later in part 2.
write_dl_model (DLModelHandle, DLModelFileName)

* *** Preprocess the dataset ***
*
* Get the preprocessing parameters from the model.
create_dl_preprocess_param_from_model (DLModelHandle, 'none', 'full_domain', [], [], [], DLPreprocessParam)
* Preprocess the dataset. This might take a few minutes.
create_dict (GenParam)
set_dict_tuple (GenParam, 'overwrite_files', true)
preprocess_dl_dataset (DLDataset, DataDirectory, DLPreprocessParam, GenParam, DLDatasetFileName)
* Write the preprocessing parameters to use them in later parts.
write_dict (DLPreprocessParam, PreprocessParamFileName, [], [])

* *** Preview the preprocessed dataset ***
*
* Before moving on to training, it is recommended to check the preprocessed dataset.
* Display the DLSamples for 6 randomly selected training images.
get_dict_tuple (DLDataset, 'samples', DatasetSamples)
find_dl_samples (DatasetSamples, 'split', 'train', 'match', SampleIndices)
tuple_shuffle (SampleIndices, ShuffledIndices)
* There are 6 classes in this dataset, so 6 samples ([0:5]) are previewed here.
read_dl_samples (DLDataset, ShuffledIndices[0:5], DLSampleBatchDisplay)
* Set parameters for dev_display_dl_data.
create_dict (WindowHandleDict)
create_dict (GenParam)
set_dict_tuple (GenParam, 'scale_windows', 1.2)
* Display the samples in DLSampleBatchDisplay.
for Index := 0 to |DLSampleBatchDisplay| - 1 by 1
    *
    * Loop over the samples in DLSampleBatchDisplay.
    dev_display_dl_data (DLSampleBatchDisplay[Index], [], DLDataset, 'bbox_ground_truth', GenParam, WindowHandleDict)
    get_dict_tuple (WindowHandleDict, 'bbox_ground_truth', WindowHandles)
    * Add explanatory text.
    dev_set_window (WindowHandles[0])
    get_dict_object (Image, DLSampleBatchDisplay[Index], 'image')
    get_image_size (Image, ImageWidth, ImageHeight)
    dev_disp_text ('New image size after preprocessing: ' + ImageWidth + ' x ' + ImageHeight, 'window', 'bottom', 'right', 'black', [], [])
    dev_set_window (WindowHandles[1])
    dev_disp_text ('Press Run (F5) to continue', 'window', 'bottom', 'right', 'black', [], [])
    stop ()
endfor

Model training stage ***************************************************
stop()
dev_update_off ()
*

* Training can be performed on a GPU or CPU.
* See the respective system requirements in the Installation Guide.
* If possible, a GPU is used in this example.
* In case you explicitly wish to run this example on the CPU,
* choose the CPU device instead.
query_available_dl_devices (['runtime','runtime'], ['gpu','cpu'], DLDeviceHandles)
if (|DLDeviceHandles| == 0)
    throw ('No supported device found to continue this example.')
endif
* Due to the filter used in query_available_dl_devices, the first device is a GPU, if available.
DLDevice := DLDeviceHandles[0]
get_dl_device_param (DLDevice, 'type', DLDeviceType)
if (DLDeviceType == 'cpu')
    * The number of used threads may have an impact
    * on the training duration.
    NumThreadsTraining := 4
    set_system ('thread_num', NumThreadsTraining)
endif
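* If the example should run on the CPU explicitly, the device query above can
* be restricted to the CPU runtime. Illustration only, not active here:
* query_available_dl_devices (['runtime'], ['cpu'], DLDeviceHandles)
* DLDevice := DLDeviceHandles[0]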

* *** Set basic parameters. ***
*
* The following parameters need to be adapted frequently.
* Model parameters.
* Batch size.
BatchSize := 2
* Initial learning rate.
InitialLearningRate := 0.0005
* Momentum should be high if the batch size is small.
Momentum := 0.99
* Parameters used by train_dl_model.
* Number of epochs to train the model.
NumEpochs := 60
* Evaluation interval (in epochs) to calculate evaluation measures on the validation split.
EvaluationIntervalEpochs := 1
* Change the learning rate in the following epochs, e.g. [15, 30].
* Set it to [] if the learning rate should not be changed.
ChangeLearningRateEpochs := 30
* Change the learning rate to the following values, e.g. InitialLearningRate * [0.1, 0.01].
* The tuple has to be of the same length as ChangeLearningRateEpochs.
ChangeLearningRateValues := InitialLearningRate * 0.1
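* As an illustration of a multi-step schedule (hypothetical values, not the
* settings used above): lower the learning rate at epochs 30 and 45, e.g.
* ChangeLearningRateEpochs := [30,45]
* ChangeLearningRateValues := InitialLearningRate * [0.1,0.01]
* Both tuples must then have the same length, as noted above.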

* *** Set advanced parameters. ***
*
* The following parameters might need to be changed in rare cases.
* Model parameter.
* Set the weight prior.
WeightPrior := 0.00001
* Parameters used by train_dl_model.
* Control whether the training progress is displayed (true/false).
EnableDisplay := true
* Set a random seed for training.
RandomSeed := 42
set_system ('seed_rand', RandomSeed)
* In order to obtain nearly deterministic training results on the same GPU
* (system, driver, cuda-version) you could specify 'cudnn_deterministic' as
* 'true'. Note that this could slow down training a bit.
* set_system ('cudnn_deterministic', 'true')
*
* Set the generic parameters of create_dl_train_param.
* Please see the documentation of create_dl_train_param for an overview of all available parameters.
GenParamName := []
GenParamValue := []

* Augmentation parameters.
* If samples should be augmented during training, create the dict required by augment_dl_samples.
* Here, we set the augmentation percentage and method.
create_dict (AugmentationParam)
* Percentage of samples to be augmented.
set_dict_tuple (AugmentationParam, 'augmentation_percentage', 50)
* Mirror the images along row and column.
set_dict_tuple (AugmentationParam, 'mirror', 'rc')
GenParamName := [GenParamName,'augment']
GenParamValue := [GenParamValue,AugmentationParam]
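* The augmentation dict can carry further generic parameters of
* augment_dl_samples, e.g. a brightness variation. Illustration only; the
* exact parameter names and value ranges should be checked against the
* augment_dl_samples reference of the installed HALCON version:
* set_dict_tuple (AugmentationParam, 'brightness_variation', 20)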

* Change strategies.
* It is possible to change model parameters during training.
* Here, we change the learning rate if specified above.
if (|ChangeLearningRateEpochs| > 0)
    create_dict (ChangeStrategy)
    * Specify the model parameter to be changed, here the learning rate.
    set_dict_tuple (ChangeStrategy, 'model_param', 'learning_rate')
    * Start the parameter value at 'initial_value'.
    set_dict_tuple (ChangeStrategy, 'initial_value', InitialLearningRate)
    * Reduce the learning rate in the following epochs.
    set_dict_tuple (ChangeStrategy, 'epochs', ChangeLearningRateEpochs)
    * Reduce the learning rate to the following value at epoch 30.
    set_dict_tuple (ChangeStrategy, 'values', ChangeLearningRateValues)
    * Collect all change strategies as input.
    GenParamName := [GenParamName,'change']
    GenParamValue := [GenParamValue,ChangeStrategy]
endif
* Serialization strategies.
* There are several options for saving intermediate models to disk (see create_dl_train_param).
* Here, the best and the final model are saved to the paths set above.
create_dict (SerializationStrategy)
set_dict_tuple (SerializationStrategy, 'type', 'best')
set_dict_tuple (SerializationStrategy, 'basename', BestModelBaseName)
GenParamName := [GenParamName,'serialize']
GenParamValue := [GenParamValue,SerializationStrategy]
create_dict (SerializationStrategy)
set_dict_tuple (SerializationStrategy, 'type', 'final')
set_dict_tuple (SerializationStrategy, 'basename', FinalModelBaseName)
GenParamName := [GenParamName,'serialize']
GenParamValue := [GenParamValue,SerializationStrategy]

* Display parameters.
* In this example, the evaluation measure for the training split is not displayed during
* training (default). If you want to do so, select a certain percentage of the training
* samples used to evaluate the model during training. A lower percentage helps to speed
* up the evaluation. If the evaluation measure for the training split shall
* not be displayed, set SelectedPercentageTrainSamples to 0.
SelectedPercentageTrainSamples := 0
* Set the x-axis argument of the training plots.
XAxisLabel := 'epochs'
create_dict (DisplayParam)
set_dict_tuple (DisplayParam, 'selected_percentage_train_samples', SelectedPercentageTrainSamples)
set_dict_tuple (DisplayParam, 'x_axis_label', XAxisLabel)
GenParamName := [GenParamName,'display']
GenParamValue := [GenParamValue,DisplayParam]


* *** Read the initial model and dataset. ***
*
* Check if all necessary files exist.
check_data_availability (ExampleDataDir, InitialModelFileName, DLDatasetFileName)
* Read in the model that was initialized during preprocessing.
read_dl_model (InitialModelFileName, DLModelHandle)
* Read in the preprocessed DLDataset file.
read_dict (DLDatasetFileName, [], [], DLDataset)

* *** Set model parameters. ***
*
* Set the model hyperparameters as specified in the settings above.
set_dl_model_param (DLModelHandle, 'learning_rate', InitialLearningRate)
set_dl_model_param (DLModelHandle, 'momentum', Momentum)
if (BatchSize == 'maximum' and DLDeviceType == 'gpu')
    set_dl_model_param_max_gpu_batch_size (DLModelHandle, 100)
else
    set_dl_model_param (DLModelHandle, 'batch_size', BatchSize)
endif
if (|WeightPrior| > 0)
    set_dl_model_param (DLModelHandle, 'weight_prior', WeightPrior)
endif
* When the batch size is determined, set the device.
set_dl_model_param (DLModelHandle, 'device', DLDevice)

* *** Train the model. ***
*
* Create the training parameters.
create_dl_train_param (DLModelHandle, NumEpochs, EvaluationIntervalEpochs, EnableDisplay, RandomSeed, GenParamName, GenParamValue, TrainParam)
* Start the training by calling the training operator
* train_dl_model_batch () within the following procedure.
train_dl_model (DLDataset, DLModelHandle, TrainParam, 0.0, TrainResults, TrainInfos, EvaluationInfos)
* Stop after the training has finished, before closing the windows.
dev_disp_text ('Press Run (F5) to continue', 'window', 'bottom', 'right', 'black', [], [])
stop ()
* Close the training windows.
dev_close_window ()

Model evaluation stage ***********************************
dev_update_off ()
*

* In this example, the evaluation steps are explained in graphics windows
* before they are executed. Set the following parameter to false in order to
* skip this visualization.
ShowExampleScreens := true
* By default, this example uses a model pretrained by MVTec. To use the model
* which was trained in part 2 of this example series, set the following
* variable to false.
UsePretrainedModel := true
* The evaluation can be performed on a GPU or CPU.
* See the respective system requirements in the Installation Guide.
* If possible, a GPU is used in this example.
* In case you explicitly wish to run this example on the CPU,
* choose the CPU device instead.
query_available_dl_devices (['runtime','runtime'], ['gpu','cpu'], DLDeviceHandles)
if (|DLDeviceHandles| == 0)
    throw ('No supported device found to continue this example.')
endif
* Due to the filter used in query_available_dl_devices, the first device is a GPU, if available.
DLDevice := DLDeviceHandles[0]


* *** Set paths ***
*
* Example data folder containing the outputs of the previous parts of the example series.
ExampleDataDir := 'E:/2024-4-10'
* File name of the finetuned object detection model.
RetrainedModelFileName := ExampleDataDir + '/best_dl_model_detection.hdl'
* Path to the DL dataset.
* Note: Adapt DataDirectory after preprocessing with another image size.
DataDirectory := ExampleDataDir + '/dldataset_pcb_512x320'
DLDatasetFileName := DataDirectory + '/dl_dataset.hdict'

* *** Set evaluation parameters ***
*
* Specify the measures of interest.
EvaluationMeasures := 'all'
* Specify the considered IoU thresholds.
IoUThresholds := []
* Display detailed results for the following IoU threshold.
DisplayIoUThreshold := 0.7
* Batch size used during evaluation.
BatchSize := 1
* Specify evaluation subsets for objects of a certain size.
AreaNames := []
AreaMin := []
AreaMax := []
* Specify the maximum number of detections considered for each measure.
MaxNumDetections := []

* *** Read the model and data ***
*
* Check if all necessary files exist.
check_data_availability_COPY_1 (ExampleDataDir, DLDatasetFileName, RetrainedModelFileName, UsePretrainedModel)
* Read the trained model.
read_dl_model (RetrainedModelFileName, DLModelHandle)
* Set the batch size of the model to 1 temporarily.
set_dl_model_param (DLModelHandle, 'batch_size', 1)
set_dl_model_param (DLModelHandle, 'device', DLDevice)
*
* Read the evaluation data.
read_dict (DLDatasetFileName, [], [], DLDataset)

* *** Set optimized parameters for inference ***
*
* To reduce the number of false positives, set lower values for
* 'max_overlap' (default = 0.5) and 'max_overlap_class_agnostic'
* (default = 1.0) and a higher confidence threshold (default = 0.5).
set_dl_model_param (DLModelHandle, 'max_overlap_class_agnostic', 0.7)
set_dl_model_param (DLModelHandle, 'max_overlap', 0.2)
set_dl_model_param (DLModelHandle, 'min_confidence', 0.6)

* *** First impression via visual inspection of the results ***
*
* Create the parameter dictionaries for visualization.
create_dict (WindowHandleDict)
create_dict (GenParam)
set_dict_tuple (GenParam, 'bbox_display_confidence', false)
* Select test images randomly.
get_dict_tuple (DLDataset, 'samples', DatasetSamples)
find_dl_samples (DatasetSamples, 'split', 'test', 'or', DLSampleIndices)
tuple_shuffle (DLSampleIndices, DLSampleIndicesShuffled)
* Apply the model and display the results.
for Index := 0 to 5 by 1
    read_dl_samples (DLDataset, DLSampleIndicesShuffled[Index], DLSampleBatch)
    apply_dl_model (DLModelHandle, DLSampleBatch, [], DLResultBatch)
    dev_display_dl_data (DLSampleBatch, DLResultBatch, DLDataset, 'bbox_both', GenParam, WindowHandleDict)
    dev_disp_text ('Press Run (F5) to continue', 'window', 'bottom', 'right', 'black', [], [])
    stop ()
endfor
dev_close_window_dict (WindowHandleDict)
set_dl_model_param (DLModelHandle, 'batch_size', BatchSize)
*


* *** Evaluate the object detection model on the evaluation data ***
*
* Set the generic evaluation parameters.
create_dict (GenParamEval)
* Set the measures of interest.
set_dict_tuple (GenParamEval, 'measures', EvaluationMeasures)
* Set the maximum number of detections considered for each measure.
if (|MaxNumDetections|)
    set_dict_tuple (GenParamEval, 'max_num_detections', MaxNumDetections)
endif
* Set the evaluation area subsets.
if (|AreaNames|)
    if ((|AreaNames| != |AreaMin|) or (|AreaNames| != |AreaMax|))
        throw ('AreaNames, AreaMin, and AreaMax must have the same size.')
    endif
    create_dict (AreaRanges)
    set_dict_tuple (AreaRanges, 'name', AreaNames)
    set_dict_tuple (AreaRanges, 'min', AreaMin)
    set_dict_tuple (AreaRanges, 'max', AreaMax)
    set_dict_tuple (GenParamEval, 'area_ranges', AreaRanges)
endif
* Set the IoU thresholds.
if (|IoUThresholds|)
    set_dict_tuple (GenParamEval, 'iou_threshold', IoUThresholds)
endif
* Enable the detailed evaluation.
set_dict_tuple (GenParamEval, 'detailed_evaluation', true)
* Show the progress of the evaluation.
set_dict_tuple (GenParamEval, 'show_progress', true)
* Evaluate the finetuned model on the 'test' split of the dataset.
evaluate_dl_model (DLDataset, DLModelHandle, 'split', 'test', GenParamEval, EvaluationResultDetection, EvalParams)
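* Besides the graphical display below, the evaluation result dictionary can be
* inspected programmatically. The exact measure keys depend on the requested
* measures and the HALCON version, so list them first and then read the
* entries of interest with get_dict_tuple:
get_dict_param (EvaluationResultDetection, 'keys', [], EvalResultKeys)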
* Display the results of the detailed evaluation.
create_dict (DisplayParam)
* Set the IoU of interest. The default is the first 'iou_threshold' of EvalParams.
if (|DisplayIoUThreshold| == 1)
    get_dict_tuple (EvalParams, 'iou_threshold', EvalIoUThresholds)
    if (find(EvalIoUThresholds,DisplayIoUThreshold) != -1)
        set_dict_tuple (DisplayParam, 'iou_threshold', DisplayIoUThreshold)
    else
        throw ('No evaluation result for the specified IoU threshold.')
    endif
endif
* Display detailed precision and recall.
set_dict_tuple (DisplayParam, 'display_mode', ['pie_charts_precision','pie_charts_recall'])
create_dict (WindowHandleDict)
dev_display_detection_detailed_evaluation (EvaluationResultDetection, EvalParams, DisplayParam, WindowHandleDict)
dev_disp_text ('Press Run (F5) to continue', 'window', 'top', 'right', 'black', [], [])
stop ()
dev_close_window_dict (WindowHandleDict)
* Display the confusion matrix.
set_dict_tuple (DisplayParam, 'display_mode', 'absolute_confusion_matrix')
dev_display_detection_detailed_evaluation (EvaluationResultDetection, EvalParams, DisplayParam, WindowHandleDict)
dev_disp_text ('Press Run (F5) to continue', 'window', 'bottom', 'right', 'black', [], [])
stop ()
dev_close_window_dict (WindowHandleDict)
* Optimize the memory consumption.
set_dl_model_param (DLModelHandle, 'batch_size', 1)
set_dl_model_param (DLModelHandle, 'optimize_for_inference', 'true')
write_dl_model (DLModelHandle, RetrainedModelFileName)
* Close the windows.
dev_close_window_dict (WindowHandleDict)

Model testing (inference) stage ****************************************
dev_update_off ()
*

* In this example, the inference steps are explained in graphics windows
* before they are executed. Set the following parameter to false in order to
* skip this visualization.
ShowExampleScreens := true
* By default, this example uses a model pretrained by MVTec. To use the model
* which was trained in part 2 of this example series, set the following
* variable to false.
UsePretrainedModel := true
* Inference can be done on a GPU or CPU.
* See the respective system requirements in the Installation Guide.
* If possible, a GPU is used in this example.
* In case you explicitly wish to run this example on the CPU,
* choose the CPU device instead.
query_available_dl_devices (['runtime','runtime'], ['gpu','cpu'], DLDeviceHandles)
if (|DLDeviceHandles| == 0)
    throw ('No supported device found to continue this example.')
endif
* Due to the filter used in query_available_dl_devices, the first device is a GPU, if available.
DLDevice := DLDeviceHandles[0]

* *** Set paths and parameters for inference ***
*
* We will demonstrate the inference on the example images.
* In a real application, newly incoming images (not used for training or evaluation)
* would be used here.
* In this example, we read the images from file.
* Directory with the images of the PCB dataset.
ExampleDir := 'E:/2024-4-10'
ImageDir := ExampleDir + '/PcbImgs'
* File name of the dict containing the parameters used for preprocessing.
* Note: Adapt DataDirectory after preprocessing with another image size.
DataDirectory := ExampleDir + '/dldataset_pcb_512x320'
PreprocessParamFileName := DataDirectory + '/dl_preprocess_param.hdict'
* File name of the finetuned object detection model.
RetrainedModelFileName := ExampleDir + '/best_dl_model_detection.hdl'
* Provide the class names and IDs.
* Class names.
ClassNames := ['SR','MR','BR','C','D','U']
* Respective class IDs.
ClassIDs := [0,1,2,3,4,5]
* Batch size used during inference.
BatchSizeInference := 1
* Postprocessing parameters for the detection model.
MinConfidence := 0.6
MaxOverlap := 0.2
MaxOverlapClassAgnostic := 0.7


  • ** Inference ***

  • Check if all necessary files exist.
    check_data_availability_COPY_2 (ExampleDir, PreprocessParamFileName, RetrainedModelFileName, UsePretrainedModel)

  • Read in the retrained model.
    read_dl_model (RetrainedModelFileName, DLModelHandle)

  • Set the batch size.
    set_dl_model_param (DLModelHandle, ‘batch_size’, BatchSizeInference)

  • Initialize the model for inference.
    set_dl_model_param (DLModelHandle, ‘device’, DLDevice)

  • Set postprocessing parameters for model.
    set_dl_model_param (DLModelHandle, ‘min_confidence’, MinConfidence)
    set_dl_model_param (DLModelHandle, ‘max_overlap’, MaxOverlap)
    set_dl_model_param (DLModelHandle, ‘max_overlap_class_agnostic’, MaxOverlapClassAgnostic)

  • Get the parameters used for preprocessing.
    read_dict (PreprocessParamFileName, [], [], DLPreprocessParam)

  • Create window dictionary for displaying results.
    create_dict (WindowHandleDict)

  • Create dictionary with dataset parameters necessary for displaying.
    create_dict (DLDataInfo)
    set_dict_tuple (DLDataInfo, ‘class_names’, ClassNames)
    set_dict_tuple (DLDataInfo, ‘class_ids’, ClassIDs)

  • Set generic parameters for visualization.
    create_dict (GenParam)
    set_dict_tuple (GenParam, ‘scale_windows’, 1.2)

* Image Acquisition 01: Code generated by Image Acquisition 01.
list_files (ImageDir, ['files','follow_links'], ImageFiles)
tuple_regexp_select (ImageFiles, ['\\.(tif|tiff|gif|bmp|jpg|jpeg|jp2|png|pcx|pgm|ppm|pbm|xwd|ima|hobj)$','ignore_case'], ImageFiles)
for Index := 0 to |ImageFiles| - 1 by 1
    read_image (ImageBatch, ImageFiles[Index])
    * Generate the DLSampleBatch.
    gen_dl_samples_from_images (ImageBatch, DLSampleBatch)
    * Preprocess the DLSampleBatch.
    preprocess_dl_samples (DLSampleBatch, DLPreprocessParam)
    * Apply the DL model to the DLSampleBatch.
    apply_dl_model (DLModelHandle, DLSampleBatch, [], DLResultBatch)
    *
    * Postprocessing and visualization.
    * Loop over each sample in the batch.
    for SampleIndex := 0 to BatchSizeInference - 1 by 1
        *
        * Get the sample and the corresponding results.
        DLSample := DLSampleBatch[SampleIndex]
        DLResult := DLResultBatch[SampleIndex]
        * Count the detections for each class.
        get_dict_tuple (DLResult, 'bbox_class_id', DetectedClassIDs)
        tuple_gen_const (|ClassIDs|, 0, NumberDetectionsPerClass)
        * Use a separate loop variable here so that the outer image loop is not disturbed.
        for ClassIndex := 0 to |ClassIDs| - 1 by 1
            NumberDetectionsPerClass[ClassIndex] := sum(DetectedClassIDs [==] ClassIDs[ClassIndex])
        endfor
        * Create the output text based on the counted detections.
        create_counting_result_text (NumberDetectionsPerClass, ClassNames, Text, TextColor, TextBoxColor)
        * Display the results and the text.
        dev_display_dl_data (DLSample, DLResult, DLDataInfo, 'bbox_result', GenParam, WindowHandleDict)
        get_dict_tuple (WindowHandleDict, 'bbox_result', WindowHandles)
        dev_set_window (WindowHandles[0])
        set_display_font (WindowHandles[0], 16, 'mono', 'true', 'false')
        dev_disp_text (Text, 'window', 'top', 'left', TextColor, ['box_color','shadow'], [TextBoxColor,'false'])
        dev_disp_text ('Press Run (F5) to continue', 'window', 'bottom', 'right', 'black', [], [])
        stop ()
    endfor
endfor
* Close the windows used for visualization.
dev_close_window_dict (WindowHandleDict)
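If the detections are needed for further processing (measuring, sorting, writing a report) rather than only for display, the bounding boxes can be read directly from each DLResult inside the sample loop above. A minimal sketch; the 'bbox_*' entries are the standard keys of a HALCON detection result, and the coordinates refer to the preprocessed (rescaled) image:

* Read the raw detections from the current DLResult (inside the sample loop).
get_dict_tuple (DLResult, 'bbox_row1', BboxRow1)
get_dict_tuple (DLResult, 'bbox_col1', BboxCol1)
get_dict_tuple (DLResult, 'bbox_row2', BboxRow2)
get_dict_tuple (DLResult, 'bbox_col2', BboxCol2)
get_dict_tuple (DLResult, 'bbox_class_id', BboxClassIDs)
get_dict_tuple (DLResult, 'bbox_confidence', BboxConfidences)
* One axis-aligned rectangle region per detection, e.g. for further blob analysis.
gen_rectangle1 (BboxRegions, BboxRow1, BboxCol1, BboxRow2, BboxCol2)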

