Tools Usage and graph#

UnsuperSegmentation Tool#

Unsuper Segmentation Tool. It is suitable for pixel-level defect detection tasks and can identify the fine pixel-level structure of the target.

digraph "OnlyTool: UnsuperSegmentation" { label="OnlyTool: UnsuperSegmentation"; rankdir="TB"; node [shape=ellipse, style=filled, color=blue, fillcolor=lightblue]; // Operator style "UnsuperSegmentation/comparator"; "UnsuperSegmentation/featmap_filter"; "UnsuperSegmentation/filter"; "UnsuperSegmentation/infer"; "UnsuperSegmentation/label_oper"; "UnsuperSegmentation/trt_infer"; "UnsuperSegmentation/view_tagger" node [shape=ellipse, style=filled, color=red, fillcolor=pink]; // Configurator style "UnsuperSegmentation/base_color_conf"; "UnsuperSegmentation/batch_size_conf"; "UnsuperSegmentation/featmap_filter.conf"; "UnsuperSegmentation/filter.conf"; "UnsuperSegmentation/image_mean_conf"; "UnsuperSegmentation/label_oper.conf"; "UnsuperSegmentation/statistician"; "UnsuperSegmentation/trainer"; "UnsuperSegmentation/trainer.conf"; "UnsuperSegmentation/trt_calibrator"; "UnsuperSegmentation/trt_calibrator.conf"; "UnsuperSegmentation/trt_float_converter"; "UnsuperSegmentation/trt_int8_converter"; "UnsuperSegmentation/unsuper_segmentation_infer_conf" node [shape=rect, style=filled, color=blue, fillcolor=lightblue]; // Property style "UnsuperSegmentation/feature_map"; "UnsuperSegmentation/hard_case"; "UnsuperSegmentation/match_result"; "UnsuperSegmentation/raw_pred"; "UnsuperSegmentation/tagged_polygons"; "UnsuperSegmentation/tagged_views"; "UnsuperSegmentation/truth" node [shape=point, style=filled, color=blue, fillcolor=lightblue]; // SingleVirtualInput property style "UnsuperSegmentation/image"; "UnsuperSegmentation/views" node [shape=invtriangle, style=filled, color=blue, fillcolor=lightblue]; // MultiVirtualInput property style node [shape=rect, style=dashed, color=blue, fillcolor=default]; // Output property style "UnsuperSegmentation/pred" node [shape=rect, style=filled, color=red, fillcolor=pink]; // Parameter style "UnsuperSegmentation/base_color"; "UnsuperSegmentation/batch_size"; "UnsuperSegmentation/featmap_filter.args"; "UnsuperSegmentation/filter.args"; "UnsuperSegmentation/image_mean"; "UnsuperSegmentation/label_oper.args"; "UnsuperSegmentation/model"; "UnsuperSegmentation/statistics"; "UnsuperSegmentation/trainer.args"; "UnsuperSegmentation/training_log"; "UnsuperSegmentation/trt_calib_result"; "UnsuperSegmentation/trt_calibrator.args"; "UnsuperSegmentation/trt_model"; "UnsuperSegmentation/unsuper_segmentation_infer" node [shape=point, style=filled, color=red, fillcolor=pink]; // SingleVirtualInput parameter style node [shape=invtriangle, style=filled, color=red, fillcolor=pink]; // MultiVirtualInput parameter style node [shape=rect, style=dashed, color=red, fillcolor=default]; // Output parameter style subgraph "cluster_UnsuperSegmentation" { label="UnsuperSegmentation"; "UnsuperSegmentation/base_color" [label="id: UnsuperSegmentation/base_color\ltype: visionflow::param::BaseColor\lupdate: 1970-01-01 08:00:00\l"]; "UnsuperSegmentation/base_color_conf" [label="id: UnsuperSegmentation/base_color_conf\ltype: visionflow::confs::BaseColorConf\lupdate: 1970-01-01 08:00:00\ldocs: Configurator to config input images' \lbase color.\l"]; "UnsuperSegmentation/batch_size" [label="id: UnsuperSegmentation/batch_size\ltype: visionflow::param::InferenceBatchSize\lupdate: 1970-01-01 08:00:00\ldocs: Inference BatchSize, Currently \lonly contains batch size. 
It may \lneed to be refactored in the future.\l"]; "UnsuperSegmentation/batch_size_conf" [label="id: UnsuperSegmentation/batch_size_conf\ltype: visionflow::confs::InferenceBatchSizeConf\lupdate: 1970-01-01 08:00:00\ldocs: Configurator to set inference \lbatch size.\l"]; "UnsuperSegmentation/comparator" [label="id: UnsuperSegmentation/comparator\ltype: visionflow::opers::RegionsMatcher\lupdate: 1970-01-01 08:00:00\ldocs: Operator to compare the predicted \lregions with the ground truth \lto get the category (in [TP, FP, \lTN, FN]) of each region.\l"]; "UnsuperSegmentation/featmap_filter" [label="id: UnsuperSegmentation/featmap_filter\ltype: visionflow::opers::SegmentationFeatureMapFilter\lupdate: 1970-01-01 08:00:00\ldocs: Operator to filter feature map \linto list of polygon regions.\l"]; "UnsuperSegmentation/featmap_filter.args" [label="id: UnsuperSegmentation/featmap_filter.args\ltype: visionflow::param::FeatureMapFilterParameters\lupdate: 1970-01-01 08:00:00\ldocs: Parameters to config the feature \lmap filter.\l"]; "UnsuperSegmentation/featmap_filter.conf" [label="id: UnsuperSegmentation/featmap_filter.conf\ltype: visionflow::confs::FeatureMapFilterConf\lupdate: 1970-01-01 08:00:00\ldocs: Configurator UI to config the \lfeature map filter.\l"]; "UnsuperSegmentation/feature_map" [label="id: UnsuperSegmentation/feature_map\ltype: visionflow::props::FeatureMap\lupdate: 1970-01-01 08:00:00\ldocs: A data structure used to store \lfeature maps detected by each \lalgorithm module.\l"]; "UnsuperSegmentation/filter" [label="id: UnsuperSegmentation/filter\ltype: visionflow::opers::PolygonsFilter\lupdate: 1970-01-01 08:00:00\ldocs: An operator to filter list of \lregions with some common thresholds \lor customized python filter script.\l"]; "UnsuperSegmentation/filter.args" [label="id: UnsuperSegmentation/filter.args\ltype: visionflow::param::PolygonsFilterParamters\lupdate: 1970-01-01 08:00:00\l"]; "UnsuperSegmentation/filter.conf" [label="id: UnsuperSegmentation/filter.conf\ltype: visionflow::confs::PolygonsFilterConf\lupdate: 1970-01-01 08:00:00\ldocs: Configurator UI to generate the \lpolygon filter args.\l"]; "UnsuperSegmentation/hard_case" [label="id: UnsuperSegmentation/hard_case\ltype: visionflow::props::PolygonRegionList\lupdate: 1970-01-01 08:00:00\ldocs: List structure to manage polygon \lregions.\l"]; "UnsuperSegmentation/image_mean" [label="id: UnsuperSegmentation/image_mean\ltype: visionflow::param::ImageMean\lupdate: 1970-01-01 08:00:00\ldocs: Image mean parameters\l"]; "UnsuperSegmentation/image_mean_conf" [label="id: UnsuperSegmentation/image_mean_conf\ltype: visionflow::confs::ImageMeanConf\lupdate: 1970-01-01 08:00:00\ldocs: ImageMeanConf Configurator class \lto compute the image mean values \lin the views.\l"]; "UnsuperSegmentation/infer" [label="id: UnsuperSegmentation/infer\ltype: visionflow::opers::UnsuperSegmentationInfer\lupdate: 1970-01-01 08:00:00\ldocs: Unsuper Segmentation Caffe inference \lengine.\l"]; "UnsuperSegmentation/label_oper" [label="id: UnsuperSegmentation/label_oper\ltype: visionflow::opers::UnsuperSegmentationLabeler\lupdate: 1970-01-01 08:00:00\ldocs: Annotate operator for unsuper \lsegmentation tool.\l"]; "UnsuperSegmentation/label_oper.args" [label="id: UnsuperSegmentation/label_oper.args\ltype: visionflow::param::BinaryPacks\lupdate: 1970-01-01 08:00:00\ldocs: A container to manage list of \lbinary datas.\l"]; "UnsuperSegmentation/label_oper.conf" [label="id: UnsuperSegmentation/label_oper.conf\ltype: visionflow::confs::CustomConf\lupdate: 1970-01-01 
08:00:00\ldocs: Configurator class to generate \lany user-defined parameters\l"]; "UnsuperSegmentation/match_result" [label="id: UnsuperSegmentation/match_result\ltype: visionflow::props::RegionMatchResultList\lupdate: 1970-01-01 08:00:00\ldocs: A data structure to store list \lof RegionMatchResult.\l"]; "UnsuperSegmentation/model" [label="id: UnsuperSegmentation/model\ltype: visionflow::param::BinaryPacks\lupdate: 1970-01-01 08:00:00\ldocs: A container to manage list of \lbinary datas.\l"]; "UnsuperSegmentation/pred" [label="id: UnsuperSegmentation/pred\ltype: visionflow::props::PolygonRegionList\lupdate: 1970-01-01 08:00:00\ldocs: List structure to manage polygon \lregions.\l"]; "UnsuperSegmentation/raw_pred" [label="id: UnsuperSegmentation/raw_pred\ltype: visionflow::props::PolygonRegionList\lupdate: 1970-01-01 08:00:00\ldocs: List structure to manage polygon \lregions.\l"]; "UnsuperSegmentation/statistician" [label="id: UnsuperSegmentation/statistician\ltype: visionflow::confs::RegionMatchResultCounter\lupdate: 1970-01-01 08:00:00\ldocs: Configurator to count region match \lresults.\l"]; "UnsuperSegmentation/statistics" [label="id: UnsuperSegmentation/statistics\ltype: visionflow::param::ModelEvaluationMetrics\lupdate: 1970-01-01 08:00:00\ldocs: A container to manage table.\l"]; "UnsuperSegmentation/tagged_polygons" [label="id: UnsuperSegmentation/tagged_polygons\ltype: visionflow::props::TaggedPolygonList\lupdate: 1970-01-01 08:00:00\ldocs: Property TaggedPolygonList implementation.\l"]; "UnsuperSegmentation/tagged_views" [label="id: UnsuperSegmentation/tagged_views\ltype: visionflow::props::ViewList\lupdate: 1970-01-01 08:00:00\ldocs: Property ViewList implementation.\l"]; "UnsuperSegmentation/trainer" [label="id: UnsuperSegmentation/trainer\ltype: visionflow::confs::UnsuperSegmentationTrainer\lupdate: 1970-01-01 08:00:00\ldocs: Model trainer for Unsuper Segmentation \lTool.\l"]; "UnsuperSegmentation/trainer.args" [label="id: UnsuperSegmentation/trainer.args\ltype: visionflow::param::UnsuperSegmentationTrainingParameters\lupdate: 1970-01-01 08:00:00\ldocs: Unsuper Segmentation Training \lParameters Group.\l"]; "UnsuperSegmentation/trainer.conf" [label="id: UnsuperSegmentation/trainer.conf\ltype: visionflow::confs::UnsuperSegmentationTrainerConf\lupdate: 1970-01-01 08:00:00\ldocs: Configurator to set unsuper segmentation \ltrainer options.\l"]; "UnsuperSegmentation/training_log" [label="id: UnsuperSegmentation/training_log\ltype: visionflow::param::TrainingLog\lupdate: 1970-01-01 08:00:00\l"]; "UnsuperSegmentation/trt_calib_result" [label="id: UnsuperSegmentation/trt_calib_result\ltype: visionflow::param::BinaryPacks\lupdate: 1970-01-01 08:00:00\ldocs: A container to manage list of \lbinary datas.\l"]; "UnsuperSegmentation/trt_calibrator" [label="id: UnsuperSegmentation/trt_calibrator\ltype: visionflow::confs::TRTCalibrator\lupdate: 1970-01-01 08:00:00\ldocs: TensorRT Int8 Calibrator.\l"]; "UnsuperSegmentation/trt_calibrator.args" [label="id: UnsuperSegmentation/trt_calibrator.args\ltype: visionflow::param::TRTCalibParameters\lupdate: 1970-01-01 08:00:00\ldocs: TensorRT Int8 Calibrator Parameters\l"]; "UnsuperSegmentation/trt_calibrator.conf" [label="id: UnsuperSegmentation/trt_calibrator.conf\ltype: visionflow::confs::TRTCalibratorConf\lupdate: 1970-01-01 08:00:00\ldocs: Configurator to config TensorRT \lInt8 Calibrator.\l"]; "UnsuperSegmentation/trt_float_converter" [label="id: UnsuperSegmentation/trt_float_converter\ltype: visionflow::confs::TRTFloatConverter\lupdate: 1970-01-01 
08:00:00\ldocs: Configurator to convert model \linto TensortRT float model.\l"]; "UnsuperSegmentation/trt_infer" [label="id: UnsuperSegmentation/trt_infer\ltype: visionflow::opers::UnsuperSegmentationTRTInfer\lupdate: 1970-01-01 08:00:00\ldocs: Unsuper Segmentation TensorRT \linference engine.\l"]; "UnsuperSegmentation/trt_int8_converter" [label="id: UnsuperSegmentation/trt_int8_converter\ltype: visionflow::confs::TRTInt8Converter\lupdate: 1970-01-01 08:00:00\ldocs: Configurator to convert model \linto TensortRT int8 model.\l"]; "UnsuperSegmentation/trt_model" [label="id: UnsuperSegmentation/trt_model\ltype: visionflow::param::BinaryPacks\lupdate: 1970-01-01 08:00:00\ldocs: A container to manage list of \lbinary datas.\l"]; "UnsuperSegmentation/truth" [label="id: UnsuperSegmentation/truth\ltype: visionflow::props::PolygonRegionList\lupdate: 1970-01-01 08:00:00\ldocs: List structure to manage polygon \lregions.\l"]; "UnsuperSegmentation/unsuper_segmentation_infer" [label="id: UnsuperSegmentation/unsuper_segmentation_infer\ltype: visionflow::param::UnsuperSegmentationInferingParameters\lupdate: 1970-01-01 08:00:00\ldocs: UnsuperSegmentationInferingParameters\l"]; "UnsuperSegmentation/unsuper_segmentation_infer_conf" [label="id: UnsuperSegmentation/unsuper_segmentation_infer_conf\ltype: visionflow::confs::UnsuperSegmentationInferConf\lupdate: 1970-01-01 08:00:00\ldocs: Configurator to set unsuper segmentation \linfer param\l"]; "UnsuperSegmentation/view_tagger" [label="id: UnsuperSegmentation/view_tagger\ltype: visionflow::opers::ViewTagger\lupdate: 1970-01-01 08:00:00\ldocs: Operator used to tag the views \lwith some already tagged polygons \lautomatically. The spilt_tag and \ltags of the most matched tagged_polygon \lselected using CIou will be set \lto the view, otherwise, the view \lwill remain its original spilt_tag \land tags info.\l"] } "UnsuperSegmentation/base_color" -> "UnsuperSegmentation/image_mean_conf"; "UnsuperSegmentation/base_color" -> "UnsuperSegmentation/trainer"; "UnsuperSegmentation/base_color_conf" -> "UnsuperSegmentation/base_color"; "UnsuperSegmentation/batch_size" -> "UnsuperSegmentation/infer"; "UnsuperSegmentation/batch_size" -> "UnsuperSegmentation/trt_float_converter"; "UnsuperSegmentation/batch_size" -> "UnsuperSegmentation/trt_int8_converter"; "UnsuperSegmentation/batch_size_conf" -> "UnsuperSegmentation/batch_size"; "UnsuperSegmentation/comparator" -> "UnsuperSegmentation/match_result"; "UnsuperSegmentation/featmap_filter" -> "UnsuperSegmentation/raw_pred"; "UnsuperSegmentation/featmap_filter.args" -> "UnsuperSegmentation/featmap_filter"; "UnsuperSegmentation/featmap_filter.conf" -> "UnsuperSegmentation/featmap_filter.args"; "UnsuperSegmentation/feature_map" -> "UnsuperSegmentation/featmap_filter"; "UnsuperSegmentation/filter" -> "UnsuperSegmentation/pred"; "UnsuperSegmentation/filter.args" -> "UnsuperSegmentation/filter"; "UnsuperSegmentation/filter.conf" -> "UnsuperSegmentation/filter.args"; "UnsuperSegmentation/hard_case" -> "UnsuperSegmentation/trainer"; "UnsuperSegmentation/image" -> "UnsuperSegmentation/image_mean_conf"; "UnsuperSegmentation/image" -> "UnsuperSegmentation/infer"; "UnsuperSegmentation/image" -> "UnsuperSegmentation/label_oper"; "UnsuperSegmentation/image" -> "UnsuperSegmentation/trainer"; "UnsuperSegmentation/image" -> "UnsuperSegmentation/trt_calibrator"; "UnsuperSegmentation/image" -> "UnsuperSegmentation/trt_infer"; "UnsuperSegmentation/image_mean" -> "UnsuperSegmentation/trainer"; "UnsuperSegmentation/image_mean_conf" -> 
"UnsuperSegmentation/image_mean"; "UnsuperSegmentation/infer" -> "UnsuperSegmentation/feature_map"; "UnsuperSegmentation/label_oper" -> "UnsuperSegmentation/hard_case"; "UnsuperSegmentation/label_oper" -> "UnsuperSegmentation/tagged_polygons"; "UnsuperSegmentation/label_oper" -> "UnsuperSegmentation/truth"; "UnsuperSegmentation/label_oper.args" -> "UnsuperSegmentation/label_oper"; "UnsuperSegmentation/label_oper.conf" -> "UnsuperSegmentation/label_oper.args"; "UnsuperSegmentation/model" -> "UnsuperSegmentation/infer"; "UnsuperSegmentation/model" -> "UnsuperSegmentation/trt_calibrator"; "UnsuperSegmentation/model" -> "UnsuperSegmentation/trt_float_converter"; "UnsuperSegmentation/model" -> "UnsuperSegmentation/trt_int8_converter"; "UnsuperSegmentation/pred" -> "UnsuperSegmentation/comparator"; "UnsuperSegmentation/pred" -> "UnsuperSegmentation/statistician"; "UnsuperSegmentation/raw_pred" -> "UnsuperSegmentation/filter"; "UnsuperSegmentation/statistician" -> "UnsuperSegmentation/statistics"; "UnsuperSegmentation/tagged_polygons" -> "UnsuperSegmentation/view_tagger"; "UnsuperSegmentation/tagged_views" -> "UnsuperSegmentation/comparator"; "UnsuperSegmentation/tagged_views" -> "UnsuperSegmentation/image_mean_conf"; "UnsuperSegmentation/tagged_views" -> "UnsuperSegmentation/statistician"; "UnsuperSegmentation/tagged_views" -> "UnsuperSegmentation/trainer"; "UnsuperSegmentation/trainer" -> "UnsuperSegmentation/model"; "UnsuperSegmentation/trainer" -> "UnsuperSegmentation/training_log"; "UnsuperSegmentation/trainer.args" -> "UnsuperSegmentation/trainer"; "UnsuperSegmentation/trainer.conf" -> "UnsuperSegmentation/trainer.args"; "UnsuperSegmentation/trt_calib_result" -> "UnsuperSegmentation/trt_int8_converter"; "UnsuperSegmentation/trt_calibrator" -> "UnsuperSegmentation/trt_calib_result"; "UnsuperSegmentation/trt_calibrator.args" -> "UnsuperSegmentation/trt_calibrator"; "UnsuperSegmentation/trt_calibrator.conf" -> "UnsuperSegmentation/trt_calibrator.args"; "UnsuperSegmentation/trt_float_converter" -> "UnsuperSegmentation/trt_model"; "UnsuperSegmentation/trt_infer" -> "UnsuperSegmentation/feature_map"; "UnsuperSegmentation/trt_int8_converter" -> "UnsuperSegmentation/trt_model"; "UnsuperSegmentation/trt_model" -> "UnsuperSegmentation/trt_infer"; "UnsuperSegmentation/truth" -> "UnsuperSegmentation/comparator"; "UnsuperSegmentation/truth" -> "UnsuperSegmentation/statistician"; "UnsuperSegmentation/truth" -> "UnsuperSegmentation/trainer"; "UnsuperSegmentation/unsuper_segmentation_infer" -> "UnsuperSegmentation/infer"; "UnsuperSegmentation/unsuper_segmentation_infer_conf" -> "UnsuperSegmentation/unsuper_segmentation_infer"; "UnsuperSegmentation/view_tagger" -> "UnsuperSegmentation/tagged_views"; "UnsuperSegmentation/views" -> "UnsuperSegmentation/infer"; "UnsuperSegmentation/views" -> "UnsuperSegmentation/trt_calibrator"; "UnsuperSegmentation/views" -> "UnsuperSegmentation/trt_infer"; "UnsuperSegmentation/views" -> "UnsuperSegmentation/view_tagger" } Input Tool# Input Tool, Use to add image files into the project from filesystem or camera. 
digraph "OnlyTool: Input" { label="OnlyTool: Input"; rankdir="TB"; node [shape=ellipse, style=filled, color=blue, fillcolor=lightblue]; // Operator style "Input/cam_image_grabber"; "Input/file_image_grabber" node [shape=ellipse, style=filled, color=red, fillcolor=pink]; // Configurator style "Input/cam_image_grabber.conf"; "Input/file_image_grabber.conf"; "Input/input_image.conf" node [shape=rect, style=filled, color=blue, fillcolor=lightblue]; // Property style node [shape=point, style=filled, color=blue, fillcolor=lightblue]; // SingleVirtualInput property style node [shape=invtriangle, style=filled, color=blue, fillcolor=lightblue]; // MultiVirtualInput property style node [shape=rect, style=dashed, color=blue, fillcolor=default]; // Output property style "Input/image"; "Input/image_info"; "Input/views" node [shape=rect, style=filled, color=red, fillcolor=pink]; // Parameter style "Input/cam_image_grabber.args"; "Input/file_image_grabber.args" node [shape=point, style=filled, color=red, fillcolor=pink]; // SingleVirtualInput parameter style node [shape=invtriangle, style=filled, color=red, fillcolor=pink]; // MultiVirtualInput parameter style node [shape=rect, style=dashed, color=red, fillcolor=default]; // Output parameter style "Input/input_image.param" subgraph "cluster_Input" { label="Input"; "Input/cam_image_grabber" [label="id: Input/cam_image_grabber\ltype: visionflow::opers::CameraImageGrabber\lupdate: 1970-01-01 08:00:00\ldocs: Operator to grab image from camera.\l"]; "Input/cam_image_grabber.args" [label="id: Input/cam_image_grabber.args\ltype: visionflow::param::BinaryPacks\lupdate: 1970-01-01 08:00:00\ldocs: A container to manage list of \lbinary datas.\l"]; "Input/cam_image_grabber.conf" [label="id: Input/cam_image_grabber.conf\ltype: visionflow::confs::CustomConf\lupdate: 1970-01-01 08:00:00\ldocs: Configurator class to generate \lany user-defined parameters\l"]; "Input/file_image_grabber" [label="id: Input/file_image_grabber\ltype: visionflow::opers::FileImageGrabber\lupdate: 1970-01-01 08:00:00\ldocs: Operator to grab image from file.\l"]; "Input/file_image_grabber.args" [label="id: Input/file_image_grabber.args\ltype: visionflow::param::BinaryPacks\lupdate: 1970-01-01 08:00:00\ldocs: A container to manage list of \lbinary datas.\l"]; "Input/file_image_grabber.conf" [label="id: Input/file_image_grabber.conf\ltype: visionflow::confs::CustomConf\lupdate: 1970-01-01 08:00:00\ldocs: Configurator class to generate \lany user-defined parameters\l"]; "Input/image" [label="id: Input/image\ltype: visionflow::props::Image\lupdate: 1970-01-01 08:00:00\ldocs: Property Image implementation.\l"]; "Input/image_info" [label="id: Input/image_info\ltype: visionflow::props::RawImageInfo\lupdate: 1970-01-01 08:00:00\ldocs: Raw Image information.\l"]; "Input/input_image.conf" [label="id: Input/input_image.conf\ltype: visionflow::confs::InputImageConf\lupdate: 1970-01-01 08:00:00\ldocs: Configurator class to generate \linput image parameter.\l"]; "Input/input_image.param" [label="id: Input/input_image.param\ltype: visionflow::param::InputImageParam\lupdate: 1970-01-01 08:00:00\ldocs: 输入图像相关参数,用\l于控制工程的输入图像\l格式规范.\l"]; "Input/views" [label="id: Input/views\ltype: visionflow::props::ViewList\lupdate: 1970-01-01 08:00:00\ldocs: Property ViewList implementation.\l"] } "Input/cam_image_grabber" -> "Input/image"; "Input/cam_image_grabber" -> "Input/image_info"; "Input/cam_image_grabber" -> "Input/views"; "Input/cam_image_grabber.args" -> "Input/cam_image_grabber"; "Input/cam_image_grabber.conf" -> 
"Input/cam_image_grabber.args"; "Input/file_image_grabber" -> "Input/image"; "Input/file_image_grabber" -> "Input/image_info"; "Input/file_image_grabber" -> "Input/views"; "Input/file_image_grabber.args" -> "Input/file_image_grabber"; "Input/file_image_grabber.conf" -> "Input/file_image_grabber.args"; "Input/input_image.conf" -> "Input/input_image.param"; "Input/input_image.param" -> "Input/cam_image_grabber"; "Input/input_image.param" -> "Input/file_image_grabber" } OCR Tool# OCR Tool, suitable for recognizing various characters. digraph "OnlyTool: OCR" { label="OnlyTool: OCR"; rankdir="TB"; node [shape=ellipse, style=filled, color=blue, fillcolor=lightblue]; // Operator style "OCR/comparator"; "OCR/filter"; "OCR/infer"; "OCR/label_oper"; "OCR/string_matcher"; "OCR/truth_string_matcher"; "OCR/view_tagger" node [shape=ellipse, style=filled, color=red, fillcolor=pink]; // Configurator style "OCR/base_color_conf"; "OCR/batch_size_conf"; "OCR/filter.conf"; "OCR/image_mean_conf"; "OCR/infer.conf"; "OCR/label_classes.conf"; "OCR/label_oper.conf"; "OCR/statistician"; "OCR/strings_statistician"; "OCR/templates_conf"; "OCR/trainer"; "OCR/trainer.conf"; "OCR/universal_conf"; "OCR/universal_model.conf" node [shape=rect, style=filled, color=blue, fillcolor=lightblue]; // Property style "OCR/feature_map"; "OCR/mask"; "OCR/match_result"; "OCR/tagged_polygons"; "OCR/tagged_views"; "OCR/truth"; "OCR/truth.strings" node [shape=point, style=filled, color=blue, fillcolor=lightblue]; // SingleVirtualInput property style "OCR/image"; "OCR/views" node [shape=invtriangle, style=filled, color=blue, fillcolor=lightblue]; // MultiVirtualInput property style node [shape=rect, style=dashed, color=blue, fillcolor=default]; // Output property style "OCR/pred.characters"; "OCR/pred.strings" node [shape=rect, style=filled, color=red, fillcolor=pink]; // Parameter style "OCR/base_color"; "OCR/batch_size"; "OCR/filter.args"; "OCR/image_mean"; "OCR/infer.args"; "OCR/label_oper.args"; "OCR/model"; "OCR/statistics"; "OCR/strings_statistics"; "OCR/templates"; "OCR/trainer.args"; "OCR/training_log"; "OCR/universal_model.args" node [shape=point, style=filled, color=red, fillcolor=pink]; // SingleVirtualInput parameter style node [shape=invtriangle, style=filled, color=red, fillcolor=pink]; // MultiVirtualInput parameter style node [shape=rect, style=dashed, color=red, fillcolor=default]; // Output parameter style "OCR/classes" subgraph "cluster_OCR" { label="OCR"; "OCR/base_color" [label="id: OCR/base_color\ltype: visionflow::param::BaseColor\lupdate: 1970-01-01 08:00:00\l"]; "OCR/base_color_conf" [label="id: OCR/base_color_conf\ltype: visionflow::confs::BaseColorConf\lupdate: 1970-01-01 08:00:00\ldocs: Configurator to config input images' \lbase color.\l"]; "OCR/batch_size" [label="id: OCR/batch_size\ltype: visionflow::param::InferenceBatchSize\lupdate: 1970-01-01 08:00:00\ldocs: Inference BatchSize, Currently \lonly contains batch size. 
It may \lneed to be refactored in the future.\l"]; "OCR/batch_size_conf" [label="id: OCR/batch_size_conf\ltype: visionflow::confs::InferenceBatchSizeConf\lupdate: 1970-01-01 08:00:00\ldocs: Configurator to set inference \lbatch size.\l"]; "OCR/classes" [label="id: OCR/classes\ltype: visionflow::param::LabelClasses\lupdate: 1970-01-01 08:00:00\ldocs: A container to manage label classes.\l"]; "OCR/comparator" [label="id: OCR/comparator\ltype: visionflow::opers::RegionsMatcher\lupdate: 1970-01-01 08:00:00\ldocs: Operator to compare the predicted \lregions with the ground truth \lto get the category (in [TP, FP, \lTN, FN]) of each region.\l"]; "OCR/feature_map" [label="id: OCR/feature_map\ltype: visionflow::props::FeatureMap\lupdate: 1970-01-01 08:00:00\ldocs: A data structure used to store \lfeature maps detected by each \lalgorithm module.\l"]; "OCR/filter" [label="id: OCR/filter\ltype: visionflow::opers::OCRFilter\lupdate: 1970-01-01 08:00:00\ldocs: OCR filter.\l"]; "OCR/filter.args" [label="id: OCR/filter.args\ltype: visionflow::param::OCRFilterParameters\lupdate: 1970-01-01 08:00:00\l"]; "OCR/filter.conf" [label="id: OCR/filter.conf\ltype: visionflow::confs::OCRFeatureFilterConf\lupdate: 1970-01-01 08:00:00\ldocs: Configurator UI to config OCR \lnms and filter parameters.\l"]; "OCR/image_mean" [label="id: OCR/image_mean\ltype: visionflow::param::ImageMean\lupdate: 1970-01-01 08:00:00\ldocs: Image mean parameters\l"]; "OCR/image_mean_conf" [label="id: OCR/image_mean_conf\ltype: visionflow::confs::ImageMeanConf\lupdate: 1970-01-01 08:00:00\ldocs: ImageMeanConf Configurator class \lto compute the image mean values \lin the views.\l"]; "OCR/infer" [label="id: OCR/infer\ltype: visionflow::opers::OCRInfer\lupdate: 1970-01-01 08:00:00\ldocs: OCR Caffe inference engine.\l"]; "OCR/infer.args" [label="id: OCR/infer.args\ltype: visionflow::param::OCRInferParameters\lupdate: 1970-01-01 08:00:00\l"]; "OCR/infer.conf" [label="id: OCR/infer.conf\ltype: visionflow::confs::OCRInferConf\lupdate: 1970-01-01 08:00:00\ldocs: Configurator to set OCR inference \lparameters.\l"]; "OCR/label_classes.conf" [label="id: OCR/label_classes.conf\ltype: visionflow::confs::LabelClassesConf\lupdate: 1970-01-01 08:00:00\ldocs: Configurator class to generate \llabel classes parameter.\l"]; "OCR/label_oper" [label="id: OCR/label_oper\ltype: visionflow::opers::OCRLabeler\lupdate: 1970-01-01 08:00:00\ldocs: Annotate operator for OCR tool.\l"]; "OCR/label_oper.args" [label="id: OCR/label_oper.args\ltype: visionflow::param::BinaryPacks\lupdate: 1970-01-01 08:00:00\ldocs: A container to manage list of \lbinary datas.\l"]; "OCR/label_oper.conf" [label="id: OCR/label_oper.conf\ltype: visionflow::confs::CustomConf\lupdate: 1970-01-01 08:00:00\ldocs: Configurator class to generate \lany user-defined parameters\l"]; "OCR/mask" [label="id: OCR/mask\ltype: visionflow::props::PolygonRegionList\lupdate: 1970-01-01 08:00:00\ldocs: List structure to manage polygon \lregions.\l"]; "OCR/match_result" [label="id: OCR/match_result\ltype: visionflow::props::RegionMatchResultList\lupdate: 1970-01-01 08:00:00\ldocs: A data structure to store list \lof RegionMatchResult.\l"]; "OCR/model" [label="id: OCR/model\ltype: visionflow::param::BinaryPacks\lupdate: 1970-01-01 08:00:00\ldocs: A container to manage list of \lbinary datas.\l"]; "OCR/pred.characters" [label="id: OCR/pred.characters\ltype: visionflow::props::PolygonRegionList\lupdate: 1970-01-01 08:00:00\ldocs: List structure to manage polygon \lregions.\l"]; "OCR/pred.strings" [label="id: 
OCR/pred.strings\ltype: visionflow::props::PolygonRegionList\lupdate: 1970-01-01 08:00:00\ldocs: List structure to manage polygon \lregions.\l"]; "OCR/statistician" [label="id: OCR/statistician\ltype: visionflow::confs::OCRRegionMatchResultCounter\lupdate: 1970-01-01 08:00:00\ldocs: Configurator to count ocr region \lmatch results.\l"]; "OCR/statistics" [label="id: OCR/statistics\ltype: visionflow::param::ModelEvaluationMetrics\lupdate: 1970-01-01 08:00:00\ldocs: A container to manage table.\l"]; "OCR/string_matcher" [label="id: OCR/string_matcher\ltype: visionflow::opers::OCRStringMatcher\lupdate: 1970-01-01 08:00:00\ldocs: OCR string matcher.\l"]; "OCR/strings_statistician" [label="id: OCR/strings_statistician\ltype: visionflow::confs::RegionMatchResultCounter\lupdate: 1970-01-01 08:00:00\ldocs: Configurator to count region match \lresults.\l"]; "OCR/strings_statistics" [label="id: OCR/strings_statistics\ltype: visionflow::param::ModelEvaluationMetrics\lupdate: 1970-01-01 08:00:00\ldocs: A container to manage table.\l"]; "OCR/tagged_polygons" [label="id: OCR/tagged_polygons\ltype: visionflow::props::TaggedPolygonList\lupdate: 1970-01-01 08:00:00\ldocs: Property TaggedPolygonList implementation.\l"]; "OCR/tagged_views" [label="id: OCR/tagged_views\ltype: visionflow::props::ViewList\lupdate: 1970-01-01 08:00:00\ldocs: Property ViewList implementation.\l"]; "OCR/templates" [label="id: OCR/templates\ltype: visionflow::param::OCRTemplates\lupdate: 1970-01-01 08:00:00\l"]; "OCR/templates_conf" [label="id: OCR/templates_conf\ltype: visionflow::confs::OCRTemplateConf\lupdate: 1970-01-01 08:00:00\ldocs: Configurator UI to config OCR \lstring match templates.\l"]; "OCR/trainer" [label="id: OCR/trainer\ltype: visionflow::confs::OCRTrainer\lupdate: 1970-01-01 08:00:00\ldocs: OCR model trainer.\l"]; "OCR/trainer.args" [label="id: OCR/trainer.args\ltype: visionflow::param::OCRTrainingParameters\lupdate: 1970-01-01 08:00:00\l"]; "OCR/trainer.conf" [label="id: OCR/trainer.conf\ltype: visionflow::confs::OCRTrainerConf\lupdate: 1970-01-01 08:00:00\ldocs: Configurator to set OCR trainer \loptions.\l"]; "OCR/training_log" [label="id: OCR/training_log\ltype: visionflow::param::TrainingLog\lupdate: 1970-01-01 08:00:00\l"]; "OCR/truth" [label="id: OCR/truth\ltype: visionflow::props::PolygonRegionList\lupdate: 1970-01-01 08:00:00\ldocs: List structure to manage polygon \lregions.\l"]; "OCR/truth.strings" [label="id: OCR/truth.strings\ltype: visionflow::props::PolygonRegionList\lupdate: 1970-01-01 08:00:00\ldocs: List structure to manage polygon \lregions.\l"]; "OCR/truth_string_matcher" [label="id: OCR/truth_string_matcher\ltype: visionflow::opers::OCRStringMatcher\lupdate: 1970-01-01 08:00:00\ldocs: OCR string matcher.\l"]; "OCR/universal_conf" [label="id: OCR/universal_conf\ltype: visionflow::confs::OCRUniversalModelConf\lupdate: 1970-01-01 08:00:00\ldocs: OCR universal model configurator.\l"]; "OCR/universal_model.args" [label="id: OCR/universal_model.args\ltype: visionflow::param::OCRUniversalModelParameters\lupdate: 1970-01-01 08:00:00\l"]; "OCR/universal_model.conf" [label="id: OCR/universal_model.conf\ltype: visionflow::confs::OCRUniversalModelParamConf\lupdate: 1970-01-01 08:00:00\ldocs: Configurator to set OCR universal \lmodel parameters.\l"]; "OCR/view_tagger" [label="id: OCR/view_tagger\ltype: visionflow::opers::ViewTagger\lupdate: 1970-01-01 08:00:00\ldocs: Operator used to tag the views \lwith some already tagged polygons \lautomatically. 
The spilt_tag and \ltags of the most matched tagged_polygon \lselected using CIou will be set \lto the view, otherwise, the view \lwill remain its original spilt_tag \land tags info.\l"] } "OCR/base_color" -> "OCR/image_mean_conf"; "OCR/base_color" -> "OCR/trainer"; "OCR/base_color_conf" -> "OCR/base_color"; "OCR/batch_size" -> "OCR/infer"; "OCR/batch_size_conf" -> "OCR/batch_size"; "OCR/classes" -> "OCR/label_oper"; "OCR/classes" -> "OCR/trainer"; "OCR/comparator" -> "OCR/match_result"; "OCR/feature_map" -> "OCR/filter"; "OCR/filter" -> "OCR/pred.characters"; "OCR/filter.args" -> "OCR/filter"; "OCR/filter.conf" -> "OCR/filter.args"; "OCR/image" -> "OCR/image_mean_conf"; "OCR/image" -> "OCR/infer"; "OCR/image" -> "OCR/label_oper"; "OCR/image" -> "OCR/trainer"; "OCR/image_mean" -> "OCR/trainer"; "OCR/image_mean_conf" -> "OCR/image_mean"; "OCR/infer" -> "OCR/feature_map"; "OCR/infer.args" -> "OCR/infer"; "OCR/infer.conf" -> "OCR/infer.args"; "OCR/label_classes.conf" -> "OCR/classes"; "OCR/label_oper" -> "OCR/mask"; "OCR/label_oper" -> "OCR/tagged_polygons"; "OCR/label_oper" -> "OCR/truth"; "OCR/label_oper.args" -> "OCR/label_oper"; "OCR/label_oper.conf" -> "OCR/label_oper.args"; "OCR/mask" -> "OCR/trainer"; "OCR/model" -> "OCR/filter"; "OCR/model" -> "OCR/infer"; "OCR/model" -> "OCR/string_matcher"; "OCR/model" -> "OCR/truth_string_matcher"; "OCR/pred.characters" -> "OCR/comparator"; "OCR/pred.characters" -> "OCR/statistician"; "OCR/pred.characters" -> "OCR/string_matcher"; "OCR/pred.strings" -> "OCR/strings_statistician"; "OCR/statistician" -> "OCR/statistics"; "OCR/string_matcher" -> "OCR/pred.strings"; "OCR/strings_statistician" -> "OCR/strings_statistics"; "OCR/tagged_polygons" -> "OCR/view_tagger"; "OCR/tagged_views" -> "OCR/comparator"; "OCR/tagged_views" -> "OCR/image_mean_conf"; "OCR/tagged_views" -> "OCR/statistician"; "OCR/tagged_views" -> "OCR/strings_statistician"; "OCR/tagged_views" -> "OCR/trainer"; "OCR/templates" -> "OCR/string_matcher"; "OCR/templates" -> "OCR/truth_string_matcher"; "OCR/templates_conf" -> "OCR/templates"; "OCR/trainer" -> "OCR/model"; "OCR/trainer" -> "OCR/training_log"; "OCR/trainer.args" -> "OCR/trainer"; "OCR/trainer.conf" -> "OCR/trainer.args"; "OCR/truth" -> "OCR/comparator"; "OCR/truth" -> "OCR/statistician"; "OCR/truth" -> "OCR/trainer"; "OCR/truth" -> "OCR/truth_string_matcher"; "OCR/truth.strings" -> "OCR/strings_statistician"; "OCR/truth_string_matcher" -> "OCR/truth.strings"; "OCR/universal_conf" -> "OCR/model"; "OCR/universal_model.args" -> "OCR/universal_conf"; "OCR/universal_model.conf" -> "OCR/universal_model.args"; "OCR/view_tagger" -> "OCR/tagged_views"; "OCR/views" -> "OCR/filter"; "OCR/views" -> "OCR/infer"; "OCR/views" -> "OCR/string_matcher"; "OCR/views" -> "OCR/truth_string_matcher"; "OCR/views" -> "OCR/view_tagger" }

Segmentation Tool#

Segmentation Tool. It is suitable for pixel-level defect detection tasks and can identify the fine pixel-level structure of the target.
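Like the UnsuperSegmentation graph above, the Segmentation graph that follows has two inference paths: a Caffe path (model feeds infer) and a TensorRT path (model feeds trt_float_converter or trt_int8_converter, which produce trt_model for trt_infer, with trt_calibrator supplying the int8 calibration result). Below is a minimal sketch, not part of the original docs, of how those dependencies can be inspected programmatically; it assumes the DOT source below has been saved to a hypothetical local file segmentation.dot and that the networkx and pydot Python packages are installed.

# Minimal sketch (assumptions: "segmentation.dot" holds the "OnlyTool: Segmentation"
# DOT source below; networkx and pydot are installed).
import networkx as nx

graph = nx.DiGraph(nx.nx_pydot.read_dot("segmentation.dot"))  # collapse parallel edges

# Depending on the networkx/pydot versions, node names may keep their surrounding
# quotes; strip them so lookups like "Segmentation/trt_infer" work.
graph = nx.relabel_nodes(graph, {n: str(n).strip('"') for n in graph.nodes})

target = "Segmentation/trt_infer"
print("direct inputs :", sorted(graph.predecessors(target)))
print("direct outputs:", sorted(graph.successors(target)))
# Everything upstream of the TensorRT engine (model, the TRT converters,
# trt_model, batch_size, image, views, ...), i.e. what must exist before it runs.
print("upstream      :", sorted(nx.ancestors(graph, target)))

The same snippet works for any tool on this page by changing the file name and the node id.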
digraph "OnlyTool: Segmentation" { label="OnlyTool: Segmentation"; rankdir="TB"; node [shape=ellipse, style=filled, color=blue, fillcolor=lightblue]; // Operator style "Segmentation/comparator"; "Segmentation/featmap_filter"; "Segmentation/filter"; "Segmentation/infer"; "Segmentation/label_oper"; "Segmentation/trt_infer"; "Segmentation/view_tagger" node [shape=ellipse, style=filled, color=red, fillcolor=pink]; // Configurator style "Segmentation/base_color_conf"; "Segmentation/batch_size_conf"; "Segmentation/featmap_filter.conf"; "Segmentation/filter.conf"; "Segmentation/image_mean_conf"; "Segmentation/label_classes.conf"; "Segmentation/label_oper.conf"; "Segmentation/statistician"; "Segmentation/trainer"; "Segmentation/trainer.conf"; "Segmentation/trt_calibrator"; "Segmentation/trt_calibrator.conf"; "Segmentation/trt_float_converter"; "Segmentation/trt_int8_converter" node [shape=rect, style=filled, color=blue, fillcolor=lightblue]; // Property style "Segmentation/feature_map"; "Segmentation/hard_case"; "Segmentation/match_result"; "Segmentation/raw_pred"; "Segmentation/tagged_polygons"; "Segmentation/tagged_views"; "Segmentation/truth" node [shape=point, style=filled, color=blue, fillcolor=lightblue]; // SingleVirtualInput property style "Segmentation/image"; "Segmentation/views" node [shape=invtriangle, style=filled, color=blue, fillcolor=lightblue]; // MultiVirtualInput property style node [shape=rect, style=dashed, color=blue, fillcolor=default]; // Output property style "Segmentation/pred" node [shape=rect, style=filled, color=red, fillcolor=pink]; // Parameter style "Segmentation/base_color"; "Segmentation/batch_size"; "Segmentation/featmap_filter.args"; "Segmentation/filter.args"; "Segmentation/image_mean"; "Segmentation/label_oper.args"; "Segmentation/model"; "Segmentation/statistics"; "Segmentation/trainer.args"; "Segmentation/training_log"; "Segmentation/trt_calib_result"; "Segmentation/trt_calibrator.args"; "Segmentation/trt_model" node [shape=point, style=filled, color=red, fillcolor=pink]; // SingleVirtualInput parameter style node [shape=invtriangle, style=filled, color=red, fillcolor=pink]; // MultiVirtualInput parameter style node [shape=rect, style=dashed, color=red, fillcolor=default]; // Output parameter style "Segmentation/classes" subgraph "cluster_Segmentation" { label="Segmentation"; "Segmentation/base_color" [label="id: Segmentation/base_color\ltype: visionflow::param::BaseColor\lupdate: 1970-01-01 08:00:00\l"]; "Segmentation/base_color_conf" [label="id: Segmentation/base_color_conf\ltype: visionflow::confs::BaseColorConf\lupdate: 1970-01-01 08:00:00\ldocs: Configurator to config input images' \lbase color.\l"]; "Segmentation/batch_size" [label="id: Segmentation/batch_size\ltype: visionflow::param::InferenceBatchSize\lupdate: 1970-01-01 08:00:00\ldocs: Inference BatchSize, Currently \lonly contains batch size. 
It may \lneed to be refactored in the future.\l"]; "Segmentation/batch_size_conf" [label="id: Segmentation/batch_size_conf\ltype: visionflow::confs::InferenceBatchSizeConf\lupdate: 1970-01-01 08:00:00\ldocs: Configurator to set inference \lbatch size.\l"]; "Segmentation/classes" [label="id: Segmentation/classes\ltype: visionflow::param::LabelClasses\lupdate: 1970-01-01 08:00:00\ldocs: A container to manage label classes.\l"]; "Segmentation/comparator" [label="id: Segmentation/comparator\ltype: visionflow::opers::RegionsMatcher\lupdate: 1970-01-01 08:00:00\ldocs: Operator to compare the predicted \lregions with the ground truth \lto get the category (in [TP, FP, \lTN, FN]) of each region.\l"]; "Segmentation/featmap_filter" [label="id: Segmentation/featmap_filter\ltype: visionflow::opers::SegmentationFeatureMapFilter\lupdate: 1970-01-01 08:00:00\ldocs: Operator to filter feature map \linto list of polygon regions.\l"]; "Segmentation/featmap_filter.args" [label="id: Segmentation/featmap_filter.args\ltype: visionflow::param::FeatureMapFilterParameters\lupdate: 1970-01-01 08:00:00\ldocs: Parameters to config the feature \lmap filter.\l"]; "Segmentation/featmap_filter.conf" [label="id: Segmentation/featmap_filter.conf\ltype: visionflow::confs::FeatureMapFilterConf\lupdate: 1970-01-01 08:00:00\ldocs: Configurator UI to config the \lfeature map filter.\l"]; "Segmentation/feature_map" [label="id: Segmentation/feature_map\ltype: visionflow::props::FeatureMap\lupdate: 1970-01-01 08:00:00\ldocs: A data structure used to store \lfeature maps detected by each \lalgorithm module.\l"]; "Segmentation/filter" [label="id: Segmentation/filter\ltype: visionflow::opers::PolygonsFilter\lupdate: 1970-01-01 08:00:00\ldocs: An operator to filter list of \lregions with some common thresholds \lor customized python filter script.\l"]; "Segmentation/filter.args" [label="id: Segmentation/filter.args\ltype: visionflow::param::PolygonsFilterParamters\lupdate: 1970-01-01 08:00:00\l"]; "Segmentation/filter.conf" [label="id: Segmentation/filter.conf\ltype: visionflow::confs::PolygonsFilterConf\lupdate: 1970-01-01 08:00:00\ldocs: Configurator UI to generate the \lpolygon filter args.\l"]; "Segmentation/hard_case" [label="id: Segmentation/hard_case\ltype: visionflow::props::PolygonRegionList\lupdate: 1970-01-01 08:00:00\ldocs: List structure to manage polygon \lregions.\l"]; "Segmentation/image_mean" [label="id: Segmentation/image_mean\ltype: visionflow::param::ImageMean\lupdate: 1970-01-01 08:00:00\ldocs: Image mean parameters\l"]; "Segmentation/image_mean_conf" [label="id: Segmentation/image_mean_conf\ltype: visionflow::confs::ImageMeanConf\lupdate: 1970-01-01 08:00:00\ldocs: ImageMeanConf Configurator class \lto compute the image mean values \lin the views.\l"]; "Segmentation/infer" [label="id: Segmentation/infer\ltype: visionflow::opers::SegmentationInfer\lupdate: 1970-01-01 08:00:00\ldocs: Segmentation Caffe inference engine.\l"]; "Segmentation/label_classes.conf" [label="id: Segmentation/label_classes.conf\ltype: visionflow::confs::LabelClassesConf\lupdate: 1970-01-01 08:00:00\ldocs: Configurator class to generate \llabel classes parameter.\l"]; "Segmentation/label_oper" [label="id: Segmentation/label_oper\ltype: visionflow::opers::SegmentationLabeler\lupdate: 1970-01-01 08:00:00\ldocs: Annotate operator for segmentation \ltool.\l"]; "Segmentation/label_oper.args" [label="id: Segmentation/label_oper.args\ltype: visionflow::param::BinaryPacks\lupdate: 1970-01-01 08:00:00\ldocs: A container to manage list of \lbinary 
datas.\l"]; "Segmentation/label_oper.conf" [label="id: Segmentation/label_oper.conf\ltype: visionflow::confs::CustomConf\lupdate: 1970-01-01 08:00:00\ldocs: Configurator class to generate \lany user-defined parameters\l"]; "Segmentation/match_result" [label="id: Segmentation/match_result\ltype: visionflow::props::RegionMatchResultList\lupdate: 1970-01-01 08:00:00\ldocs: A data structure to store list \lof RegionMatchResult.\l"]; "Segmentation/model" [label="id: Segmentation/model\ltype: visionflow::param::BinaryPacks\lupdate: 1970-01-01 08:00:00\ldocs: A container to manage list of \lbinary datas.\l"]; "Segmentation/pred" [label="id: Segmentation/pred\ltype: visionflow::props::PolygonRegionList\lupdate: 1970-01-01 08:00:00\ldocs: List structure to manage polygon \lregions.\l"]; "Segmentation/raw_pred" [label="id: Segmentation/raw_pred\ltype: visionflow::props::PolygonRegionList\lupdate: 1970-01-01 08:00:00\ldocs: List structure to manage polygon \lregions.\l"]; "Segmentation/statistician" [label="id: Segmentation/statistician\ltype: visionflow::confs::RegionMatchResultCounter\lupdate: 1970-01-01 08:00:00\ldocs: Configurator to count region match \lresults.\l"]; "Segmentation/statistics" [label="id: Segmentation/statistics\ltype: visionflow::param::ModelEvaluationMetrics\lupdate: 1970-01-01 08:00:00\ldocs: A container to manage table.\l"]; "Segmentation/tagged_polygons" [label="id: Segmentation/tagged_polygons\ltype: visionflow::props::TaggedPolygonList\lupdate: 1970-01-01 08:00:00\ldocs: Property TaggedPolygonList implementation.\l"]; "Segmentation/tagged_views" [label="id: Segmentation/tagged_views\ltype: visionflow::props::ViewList\lupdate: 1970-01-01 08:00:00\ldocs: Property ViewList implementation.\l"]; "Segmentation/trainer" [label="id: Segmentation/trainer\ltype: visionflow::confs::SegmentationTrainer\lupdate: 1970-01-01 08:00:00\ldocs: Model trainer for Segmentation \lTool.\l"]; "Segmentation/trainer.args" [label="id: Segmentation/trainer.args\ltype: visionflow::param::SegmentationTrainingParameters\lupdate: 1970-01-01 08:00:00\ldocs: Segmentation Training Parameters \lGroup.\l"]; "Segmentation/trainer.conf" [label="id: Segmentation/trainer.conf\ltype: visionflow::confs::SegmentationTrainerConf\lupdate: 1970-01-01 08:00:00\ldocs: Configurator to set segmentation \ltrainer options.\l"]; "Segmentation/training_log" [label="id: Segmentation/training_log\ltype: visionflow::param::TrainingLog\lupdate: 1970-01-01 08:00:00\l"]; "Segmentation/trt_calib_result" [label="id: Segmentation/trt_calib_result\ltype: visionflow::param::BinaryPacks\lupdate: 1970-01-01 08:00:00\ldocs: A container to manage list of \lbinary datas.\l"]; "Segmentation/trt_calibrator" [label="id: Segmentation/trt_calibrator\ltype: visionflow::confs::TRTCalibrator\lupdate: 1970-01-01 08:00:00\ldocs: TensorRT Int8 Calibrator.\l"]; "Segmentation/trt_calibrator.args" [label="id: Segmentation/trt_calibrator.args\ltype: visionflow::param::TRTCalibParameters\lupdate: 1970-01-01 08:00:00\ldocs: TensorRT Int8 Calibrator Parameters\l"]; "Segmentation/trt_calibrator.conf" [label="id: Segmentation/trt_calibrator.conf\ltype: visionflow::confs::TRTCalibratorConf\lupdate: 1970-01-01 08:00:00\ldocs: Configurator to config TensorRT \lInt8 Calibrator.\l"]; "Segmentation/trt_float_converter" [label="id: Segmentation/trt_float_converter\ltype: visionflow::confs::TRTFloatConverter\lupdate: 1970-01-01 08:00:00\ldocs: Configurator to convert model \linto TensortRT float model.\l"]; "Segmentation/trt_infer" [label="id: 
Segmentation/trt_infer\ltype: visionflow::opers::SegmentationTRTInfer\lupdate: 1970-01-01 08:00:00\ldocs: Segmentation TensorRT inference \lengine.\l"]; "Segmentation/trt_int8_converter" [label="id: Segmentation/trt_int8_converter\ltype: visionflow::confs::TRTInt8Converter\lupdate: 1970-01-01 08:00:00\ldocs: Configurator to convert model \linto TensortRT int8 model.\l"]; "Segmentation/trt_model" [label="id: Segmentation/trt_model\ltype: visionflow::param::BinaryPacks\lupdate: 1970-01-01 08:00:00\ldocs: A container to manage list of \lbinary datas.\l"]; "Segmentation/truth" [label="id: Segmentation/truth\ltype: visionflow::props::PolygonRegionList\lupdate: 1970-01-01 08:00:00\ldocs: List structure to manage polygon \lregions.\l"]; "Segmentation/view_tagger" [label="id: Segmentation/view_tagger\ltype: visionflow::opers::ViewTagger\lupdate: 1970-01-01 08:00:00\ldocs: Operator used to tag the views \lwith some already tagged polygons \lautomatically. The spilt_tag and \ltags of the most matched tagged_polygon \lselected using CIou will be set \lto the view, otherwise, the view \lwill remain its original spilt_tag \land tags info.\l"] } "Segmentation/base_color" -> "Segmentation/image_mean_conf"; "Segmentation/base_color" -> "Segmentation/trainer"; "Segmentation/base_color_conf" -> "Segmentation/base_color"; "Segmentation/batch_size" -> "Segmentation/infer"; "Segmentation/batch_size" -> "Segmentation/trt_float_converter"; "Segmentation/batch_size" -> "Segmentation/trt_int8_converter"; "Segmentation/batch_size_conf" -> "Segmentation/batch_size"; "Segmentation/classes" -> "Segmentation/label_oper"; "Segmentation/classes" -> "Segmentation/trainer"; "Segmentation/comparator" -> "Segmentation/match_result"; "Segmentation/featmap_filter" -> "Segmentation/raw_pred"; "Segmentation/featmap_filter.args" -> "Segmentation/featmap_filter"; "Segmentation/featmap_filter.conf" -> "Segmentation/featmap_filter.args"; "Segmentation/feature_map" -> "Segmentation/featmap_filter"; "Segmentation/filter" -> "Segmentation/pred"; "Segmentation/filter.args" -> "Segmentation/filter"; "Segmentation/filter.conf" -> "Segmentation/filter.args"; "Segmentation/hard_case" -> "Segmentation/trainer"; "Segmentation/image" -> "Segmentation/image_mean_conf"; "Segmentation/image" -> "Segmentation/infer"; "Segmentation/image" -> "Segmentation/label_oper"; "Segmentation/image" -> "Segmentation/trainer"; "Segmentation/image" -> "Segmentation/trt_calibrator"; "Segmentation/image" -> "Segmentation/trt_infer"; "Segmentation/image_mean" -> "Segmentation/trainer"; "Segmentation/image_mean_conf" -> "Segmentation/image_mean"; "Segmentation/infer" -> "Segmentation/feature_map"; "Segmentation/label_classes.conf" -> "Segmentation/classes"; "Segmentation/label_oper" -> "Segmentation/hard_case"; "Segmentation/label_oper" -> "Segmentation/tagged_polygons"; "Segmentation/label_oper" -> "Segmentation/truth"; "Segmentation/label_oper.args" -> "Segmentation/label_oper"; "Segmentation/label_oper.conf" -> "Segmentation/label_oper.args"; "Segmentation/model" -> "Segmentation/infer"; "Segmentation/model" -> "Segmentation/trt_calibrator"; "Segmentation/model" -> "Segmentation/trt_float_converter"; "Segmentation/model" -> "Segmentation/trt_int8_converter"; "Segmentation/pred" -> "Segmentation/comparator"; "Segmentation/pred" -> "Segmentation/statistician"; "Segmentation/raw_pred" -> "Segmentation/filter"; "Segmentation/statistician" -> "Segmentation/statistics"; "Segmentation/tagged_polygons" -> "Segmentation/view_tagger"; "Segmentation/tagged_views" -> 
"Segmentation/comparator"; "Segmentation/tagged_views" -> "Segmentation/image_mean_conf"; "Segmentation/tagged_views" -> "Segmentation/statistician"; "Segmentation/tagged_views" -> "Segmentation/trainer"; "Segmentation/trainer" -> "Segmentation/model"; "Segmentation/trainer" -> "Segmentation/training_log"; "Segmentation/trainer.args" -> "Segmentation/trainer"; "Segmentation/trainer.conf" -> "Segmentation/trainer.args"; "Segmentation/trt_calib_result" -> "Segmentation/trt_int8_converter"; "Segmentation/trt_calibrator" -> "Segmentation/trt_calib_result"; "Segmentation/trt_calibrator.args" -> "Segmentation/trt_calibrator"; "Segmentation/trt_calibrator.conf" -> "Segmentation/trt_calibrator.args"; "Segmentation/trt_float_converter" -> "Segmentation/trt_model"; "Segmentation/trt_infer" -> "Segmentation/feature_map"; "Segmentation/trt_int8_converter" -> "Segmentation/trt_model"; "Segmentation/trt_model" -> "Segmentation/trt_infer"; "Segmentation/truth" -> "Segmentation/comparator"; "Segmentation/truth" -> "Segmentation/statistician"; "Segmentation/truth" -> "Segmentation/trainer"; "Segmentation/view_tagger" -> "Segmentation/tagged_views"; "Segmentation/views" -> "Segmentation/infer"; "Segmentation/views" -> "Segmentation/trt_calibrator"; "Segmentation/views" -> "Segmentation/trt_infer"; "Segmentation/views" -> "Segmentation/view_tagger" } Integration Tool# Integration classification tool. digraph "OnlyTool: Integration" { label="OnlyTool: Integration"; rankdir="TB"; node [shape=ellipse, style=filled, color=blue, fillcolor=lightblue]; // Operator style "Integration/classifier" node [shape=ellipse, style=filled, color=red, fillcolor=pink]; // Configurator style "Integration/classifier.conf" node [shape=rect, style=filled, color=blue, fillcolor=lightblue]; // Property style node [shape=point, style=filled, color=blue, fillcolor=lightblue]; // SingleVirtualInput property style node [shape=invtriangle, style=filled, color=blue, fillcolor=lightblue]; // MultiVirtualInput property style "Integration/properties" node [shape=rect, style=dashed, color=blue, fillcolor=default]; // Output property style "Integration/integration_class" node [shape=rect, style=filled, color=red, fillcolor=pink]; // Parameter style "Integration/classifier.args" node [shape=point, style=filled, color=red, fillcolor=pink]; // SingleVirtualInput parameter style node [shape=invtriangle, style=filled, color=red, fillcolor=pink]; // MultiVirtualInput parameter style node [shape=rect, style=dashed, color=red, fillcolor=default]; // Output parameter style subgraph "cluster_Integration" { label="Integration"; "Integration/classifier" [label="id: Integration/classifier\ltype: visionflow::opers::IntegrationClassifier\lupdate: 1970-01-01 08:00:00\ldocs: Classifier operator for integration \ltool.\l"]; "Integration/classifier.args" [label="id: Integration/classifier.args\ltype: visionflow::param::IntegrationClassifyParameter\lupdate: 1970-01-01 08:00:00\l"]; "Integration/classifier.conf" [label="id: Integration/classifier.conf\ltype: visionflow::confs::IntegrationClassifierConf\lupdate: 1970-01-01 08:00:00\ldocs: Configurator to config the integration \lclassifier.\l"]; "Integration/integration_class" [label="id: Integration/integration_class\ltype: visionflow::props::StringMessage\lupdate: 1970-01-01 08:00:00\ldocs: Properties for string message.\l"] } "Integration/classifier" -> "Integration/integration_class"; "Integration/classifier.args" -> "Integration/classifier"; "Integration/classifier.conf" -> "Integration/classifier.args"; 
"Integration/properties" -> "Integration/classifier" } UnsuperClassification Tool# Unsuper Classification Tool digraph "OnlyTool: UnsuperClassification" { label="OnlyTool: UnsuperClassification"; rankdir="TB"; node [shape=ellipse, style=filled, color=blue, fillcolor=lightblue]; // Operator style "UnsuperClassification/comparator"; "UnsuperClassification/infer"; "UnsuperClassification/label_oper"; "UnsuperClassification/trt_infer"; "UnsuperClassification/view_tagger" node [shape=ellipse, style=filled, color=red, fillcolor=pink]; // Configurator style "UnsuperClassification/base_color_conf"; "UnsuperClassification/batch_size_conf"; "UnsuperClassification/image_mean_conf"; "UnsuperClassification/infer_param_conf"; "UnsuperClassification/label_oper.conf"; "UnsuperClassification/statistician"; "UnsuperClassification/trainer"; "UnsuperClassification/trainer.conf"; "UnsuperClassification/trt_calibrator"; "UnsuperClassification/trt_calibrator.conf"; "UnsuperClassification/trt_float_converter"; "UnsuperClassification/trt_int8_converter" node [shape=rect, style=filled, color=blue, fillcolor=lightblue]; // Property style "UnsuperClassification/feature_map"; "UnsuperClassification/hard_case"; "UnsuperClassification/match_result"; "UnsuperClassification/tagged_polygons"; "UnsuperClassification/tagged_views"; "UnsuperClassification/truth" node [shape=point, style=filled, color=blue, fillcolor=lightblue]; // SingleVirtualInput property style "UnsuperClassification/image"; "UnsuperClassification/views" node [shape=invtriangle, style=filled, color=blue, fillcolor=lightblue]; // MultiVirtualInput property style node [shape=rect, style=dashed, color=blue, fillcolor=default]; // Output property style "UnsuperClassification/pred" node [shape=rect, style=filled, color=red, fillcolor=pink]; // Parameter style "UnsuperClassification/base_color"; "UnsuperClassification/batch_size"; "UnsuperClassification/image_mean"; "UnsuperClassification/infer_param"; "UnsuperClassification/label_oper.args"; "UnsuperClassification/model"; "UnsuperClassification/statistics"; "UnsuperClassification/trainer.args"; "UnsuperClassification/training_log"; "UnsuperClassification/trt_calib_result"; "UnsuperClassification/trt_calibrator.args"; "UnsuperClassification/trt_model" node [shape=point, style=filled, color=red, fillcolor=pink]; // SingleVirtualInput parameter style node [shape=invtriangle, style=filled, color=red, fillcolor=pink]; // MultiVirtualInput parameter style node [shape=rect, style=dashed, color=red, fillcolor=default]; // Output parameter style subgraph "cluster_UnsuperClassification" { label="UnsuperClassification"; "UnsuperClassification/base_color" [label="id: UnsuperClassification/base_color\ltype: visionflow::param::BaseColor\lupdate: 1970-01-01 08:00:00\l"]; "UnsuperClassification/base_color_conf" [label="id: UnsuperClassification/base_color_conf\ltype: visionflow::confs::BaseColorConf\lupdate: 1970-01-01 08:00:00\ldocs: Configurator to config input images' \lbase color.\l"]; "UnsuperClassification/batch_size" [label="id: UnsuperClassification/batch_size\ltype: visionflow::param::InferenceBatchSize\lupdate: 1970-01-01 08:00:00\ldocs: Inference BatchSize, Currently \lonly contains batch size. 
It may \lneed to be refactored in the future.\l"]; "UnsuperClassification/batch_size_conf" [label="id: UnsuperClassification/batch_size_conf\ltype: visionflow::confs::InferenceBatchSizeConf\lupdate: 1970-01-01 08:00:00\ldocs: Configurator to set inference \lbatch size.\l"]; "UnsuperClassification/comparator" [label="id: UnsuperClassification/comparator\ltype: visionflow::opers::RegionsMatcher\lupdate: 1970-01-01 08:00:00\ldocs: Operator to compare the predicted \lregions with the ground truth \lto get the category (in [TP, FP, \lTN, FN]) of each region.\l"]; "UnsuperClassification/feature_map" [label="id: UnsuperClassification/feature_map\ltype: visionflow::props::FeatureMap\lupdate: 1970-01-01 08:00:00\ldocs: A data structure used to store \lfeature maps detected by each \lalgorithm module.\l"]; "UnsuperClassification/hard_case" [label="id: UnsuperClassification/hard_case\ltype: visionflow::props::PolygonRegionList\lupdate: 1970-01-01 08:00:00\ldocs: List structure to manage polygon \lregions.\l"]; "UnsuperClassification/image_mean" [label="id: UnsuperClassification/image_mean\ltype: visionflow::param::ImageMean\lupdate: 1970-01-01 08:00:00\ldocs: Image mean parameters\l"]; "UnsuperClassification/image_mean_conf" [label="id: UnsuperClassification/image_mean_conf\ltype: visionflow::confs::ImageMeanConf\lupdate: 1970-01-01 08:00:00\ldocs: ImageMeanConf Configurator class \lto compute the image mean values \lin the views.\l"]; "UnsuperClassification/infer" [label="id: UnsuperClassification/infer\ltype: visionflow::opers::UnsuperClassificationInfer\lupdate: 1970-01-01 08:00:00\ldocs: Unsuper Segmentation Caffe inference \lengine.\l"]; "UnsuperClassification/infer_param" [label="id: UnsuperClassification/infer_param\ltype: visionflow::param::UnsuperClassificationInferingParameters\lupdate: 1970-01-01 08:00:00\ldocs: Parameters using for unsuper classification \linference\l"]; "UnsuperClassification/infer_param_conf" [label="id: UnsuperClassification/infer_param_conf\ltype: visionflow::confs::UnsuperClassificationInferConf\lupdate: 1970-01-01 08:00:00\ldocs: Configurator to set unsuper classification \linfer param\l"]; "UnsuperClassification/label_oper" [label="id: UnsuperClassification/label_oper\ltype: visionflow::opers::UnsuperClassificationLabeler\lupdate: 1970-01-01 08:00:00\ldocs: Annotate operator for unsuper \lclassification tool.\l"]; "UnsuperClassification/label_oper.args" [label="id: UnsuperClassification/label_oper.args\ltype: visionflow::param::BinaryPacks\lupdate: 1970-01-01 08:00:00\ldocs: A container to manage list of \lbinary datas.\l"]; "UnsuperClassification/label_oper.conf" [label="id: UnsuperClassification/label_oper.conf\ltype: visionflow::confs::CustomConf\lupdate: 1970-01-01 08:00:00\ldocs: Configurator class to generate \lany user-defined parameters\l"]; "UnsuperClassification/match_result" [label="id: UnsuperClassification/match_result\ltype: visionflow::props::RegionMatchResultList\lupdate: 1970-01-01 08:00:00\ldocs: A data structure to store list \lof RegionMatchResult.\l"]; "UnsuperClassification/model" [label="id: UnsuperClassification/model\ltype: visionflow::param::BinaryPacks\lupdate: 1970-01-01 08:00:00\ldocs: A container to manage list of \lbinary datas.\l"]; "UnsuperClassification/pred" [label="id: UnsuperClassification/pred\ltype: visionflow::props::PolygonRegionList\lupdate: 1970-01-01 08:00:00\ldocs: List structure to manage polygon \lregions.\l"]; "UnsuperClassification/statistician" [label="id: UnsuperClassification/statistician\ltype: 
visionflow::confs::ClassificationRegionMatchResultCounter\lupdate: 1970-01-01 08:00:00\ldocs: Configurator to count classification \lregion match results.\l"]; "UnsuperClassification/statistics" [label="id: UnsuperClassification/statistics\ltype: visionflow::param::ModelEvaluationMetrics\lupdate: 1970-01-01 08:00:00\ldocs: A container to manage table.\l"]; "UnsuperClassification/tagged_polygons" [label="id: UnsuperClassification/tagged_polygons\ltype: visionflow::props::TaggedPolygonList\lupdate: 1970-01-01 08:00:00\ldocs: Property TaggedPolygonList implementation.\l"]; "UnsuperClassification/tagged_views" [label="id: UnsuperClassification/tagged_views\ltype: visionflow::props::ViewList\lupdate: 1970-01-01 08:00:00\ldocs: Property ViewList implementation.\l"]; "UnsuperClassification/trainer" [label="id: UnsuperClassification/trainer\ltype: visionflow::confs::UnsuperClassificationTrainer\lupdate: 1970-01-01 08:00:00\ldocs: Model trainer for Unsuper Classification \lTool.\l"]; "UnsuperClassification/trainer.args" [label="id: UnsuperClassification/trainer.args\ltype: visionflow::param::UnsuperClassificationTrainingParameters\lupdate: 1970-01-01 08:00:00\ldocs: Unsuper Classification Training \lParameters Group.\l"]; "UnsuperClassification/trainer.conf" [label="id: UnsuperClassification/trainer.conf\ltype: visionflow::confs::UnsuperClassificationTrainerConf\lupdate: 1970-01-01 08:00:00\ldocs: Configurator to set unsuper classification \ltrainer options.\l"]; "UnsuperClassification/training_log" [label="id: UnsuperClassification/training_log\ltype: visionflow::param::TrainingLog\lupdate: 1970-01-01 08:00:00\l"]; "UnsuperClassification/trt_calib_result" [label="id: UnsuperClassification/trt_calib_result\ltype: visionflow::param::BinaryPacks\lupdate: 1970-01-01 08:00:00\ldocs: A container to manage list of \lbinary datas.\l"]; "UnsuperClassification/trt_calibrator" [label="id: UnsuperClassification/trt_calibrator\ltype: visionflow::confs::TRTCalibrator\lupdate: 1970-01-01 08:00:00\ldocs: TensorRT Int8 Calibrator.\l"]; "UnsuperClassification/trt_calibrator.args" [label="id: UnsuperClassification/trt_calibrator.args\ltype: visionflow::param::TRTCalibParameters\lupdate: 1970-01-01 08:00:00\ldocs: TensorRT Int8 Calibrator Parameters\l"]; "UnsuperClassification/trt_calibrator.conf" [label="id: UnsuperClassification/trt_calibrator.conf\ltype: visionflow::confs::TRTCalibratorConf\lupdate: 1970-01-01 08:00:00\ldocs: Configurator to config TensorRT \lInt8 Calibrator.\l"]; "UnsuperClassification/trt_float_converter" [label="id: UnsuperClassification/trt_float_converter\ltype: visionflow::confs::TRTFloatConverter\lupdate: 1970-01-01 08:00:00\ldocs: Configurator to convert model \linto TensorRT float model.\l"]; "UnsuperClassification/trt_infer" [label="id: UnsuperClassification/trt_infer\ltype: visionflow::opers::UnsuperClassificationTRTInfer\lupdate: 1970-01-01 08:00:00\ldocs: Unsuper Classification TensorRT \linference engine.\l"]; "UnsuperClassification/trt_int8_converter" [label="id: UnsuperClassification/trt_int8_converter\ltype: visionflow::confs::TRTInt8Converter\lupdate: 1970-01-01 08:00:00\ldocs: Configurator to convert model \linto TensorRT int8 model.\l"]; "UnsuperClassification/trt_model" [label="id: UnsuperClassification/trt_model\ltype: visionflow::param::BinaryPacks\lupdate: 1970-01-01 08:00:00\ldocs: A container to manage list of \lbinary datas.\l"]; "UnsuperClassification/truth" [label="id: UnsuperClassification/truth\ltype: visionflow::props::PolygonRegionList\lupdate: 1970-01-01 
08:00:00\ldocs: List structure to manage polygon \lregions.\l"]; "UnsuperClassification/view_tagger" [label="id: UnsuperClassification/view_tagger\ltype: visionflow::opers::ViewTagger\lupdate: 1970-01-01 08:00:00\ldocs: Operator used to tag the views \lwith some already tagged polygons \lautomatically. The spilt_tag and \ltags of the most matched tagged_polygon \lselected using CIou will be set \lto the view, otherwise, the view \lwill remain its original spilt_tag \land tags info.\l"] } "UnsuperClassification/base_color" -> "UnsuperClassification/image_mean_conf"; "UnsuperClassification/base_color" -> "UnsuperClassification/trainer"; "UnsuperClassification/base_color_conf" -> "UnsuperClassification/base_color"; "UnsuperClassification/batch_size" -> "UnsuperClassification/infer"; "UnsuperClassification/batch_size" -> "UnsuperClassification/trt_float_converter"; "UnsuperClassification/batch_size" -> "UnsuperClassification/trt_int8_converter"; "UnsuperClassification/batch_size_conf" -> "UnsuperClassification/batch_size"; "UnsuperClassification/comparator" -> "UnsuperClassification/match_result"; "UnsuperClassification/hard_case" -> "UnsuperClassification/trainer"; "UnsuperClassification/image" -> "UnsuperClassification/image_mean_conf"; "UnsuperClassification/image" -> "UnsuperClassification/infer"; "UnsuperClassification/image" -> "UnsuperClassification/label_oper"; "UnsuperClassification/image" -> "UnsuperClassification/trainer"; "UnsuperClassification/image" -> "UnsuperClassification/trt_calibrator"; "UnsuperClassification/image" -> "UnsuperClassification/trt_infer"; "UnsuperClassification/image_mean" -> "UnsuperClassification/trainer"; "UnsuperClassification/image_mean_conf" -> "UnsuperClassification/image_mean"; "UnsuperClassification/infer" -> "UnsuperClassification/feature_map"; "UnsuperClassification/infer" -> "UnsuperClassification/pred"; "UnsuperClassification/infer_param" -> "UnsuperClassification/infer"; "UnsuperClassification/infer_param_conf" -> "UnsuperClassification/infer_param"; "UnsuperClassification/label_oper" -> "UnsuperClassification/hard_case"; "UnsuperClassification/label_oper" -> "UnsuperClassification/tagged_polygons"; "UnsuperClassification/label_oper" -> "UnsuperClassification/truth"; "UnsuperClassification/label_oper.args" -> "UnsuperClassification/label_oper"; "UnsuperClassification/label_oper.conf" -> "UnsuperClassification/label_oper.args"; "UnsuperClassification/model" -> "UnsuperClassification/infer"; "UnsuperClassification/model" -> "UnsuperClassification/trt_calibrator"; "UnsuperClassification/model" -> "UnsuperClassification/trt_float_converter"; "UnsuperClassification/model" -> "UnsuperClassification/trt_int8_converter"; "UnsuperClassification/pred" -> "UnsuperClassification/comparator"; "UnsuperClassification/pred" -> "UnsuperClassification/statistician"; "UnsuperClassification/statistician" -> "UnsuperClassification/statistics"; "UnsuperClassification/tagged_polygons" -> "UnsuperClassification/view_tagger"; "UnsuperClassification/tagged_views" -> "UnsuperClassification/comparator"; "UnsuperClassification/tagged_views" -> "UnsuperClassification/image_mean_conf"; "UnsuperClassification/tagged_views" -> "UnsuperClassification/statistician"; "UnsuperClassification/tagged_views" -> "UnsuperClassification/trainer"; "UnsuperClassification/trainer" -> "UnsuperClassification/model"; "UnsuperClassification/trainer" -> "UnsuperClassification/training_log"; "UnsuperClassification/trainer.args" -> "UnsuperClassification/trainer"; 
"UnsuperClassification/trainer.conf" -> "UnsuperClassification/trainer.args"; "UnsuperClassification/trt_calib_result" -> "UnsuperClassification/trt_int8_converter"; "UnsuperClassification/trt_calibrator" -> "UnsuperClassification/trt_calib_result"; "UnsuperClassification/trt_calibrator.args" -> "UnsuperClassification/trt_calibrator"; "UnsuperClassification/trt_calibrator.conf" -> "UnsuperClassification/trt_calibrator.args"; "UnsuperClassification/trt_float_converter" -> "UnsuperClassification/trt_model"; "UnsuperClassification/trt_infer" -> "UnsuperClassification/feature_map"; "UnsuperClassification/trt_infer" -> "UnsuperClassification/pred"; "UnsuperClassification/trt_int8_converter" -> "UnsuperClassification/trt_model"; "UnsuperClassification/trt_model" -> "UnsuperClassification/trt_infer"; "UnsuperClassification/truth" -> "UnsuperClassification/comparator"; "UnsuperClassification/truth" -> "UnsuperClassification/statistician"; "UnsuperClassification/truth" -> "UnsuperClassification/trainer"; "UnsuperClassification/view_tagger" -> "UnsuperClassification/tagged_views"; "UnsuperClassification/views" -> "UnsuperClassification/infer"; "UnsuperClassification/views" -> "UnsuperClassification/trt_calibrator"; "UnsuperClassification/views" -> "UnsuperClassification/trt_infer"; "UnsuperClassification/views" -> "UnsuperClassification/view_tagger" } AssemblyVerification Tool# AssemblyVerification Tool. digraph "OnlyTool: AssemblyVerification" { label="OnlyTool: AssemblyVerification"; rankdir="TB"; node [shape=ellipse, style=filled, color=blue, fillcolor=lightblue]; // Operator style "AssemblyVerification/comparator"; "AssemblyVerification/filter"; "AssemblyVerification/infer"; "AssemblyVerification/label_oper"; "AssemblyVerification/prediction_objects_matcher"; "AssemblyVerification/truth_objects_matcher"; "AssemblyVerification/view_tagger" node [shape=ellipse, style=filled, color=red, fillcolor=pink]; // Configurator style "AssemblyVerification/base_color_conf"; "AssemblyVerification/batch_size_conf"; "AssemblyVerification/filter.conf"; "AssemblyVerification/image_mean_conf"; "AssemblyVerification/label_classes.conf"; "AssemblyVerification/label_oper.conf"; "AssemblyVerification/objects_statistician"; "AssemblyVerification/statistician"; "AssemblyVerification/templates_conf"; "AssemblyVerification/trainer"; "AssemblyVerification/trainer.conf" node [shape=rect, style=filled, color=blue, fillcolor=lightblue]; // Property style "AssemblyVerification/feature_map"; "AssemblyVerification/mask"; "AssemblyVerification/match_result"; "AssemblyVerification/tagged_polygons"; "AssemblyVerification/tagged_views"; "AssemblyVerification/truth"; "AssemblyVerification/truth.objects" node [shape=point, style=filled, color=blue, fillcolor=lightblue]; // SingleVirtualInput property style "AssemblyVerification/image"; "AssemblyVerification/views" node [shape=invtriangle, style=filled, color=blue, fillcolor=lightblue]; // MultiVirtualInput property style node [shape=rect, style=dashed, color=blue, fillcolor=default]; // Output property style "AssemblyVerification/pred.keypoints"; "AssemblyVerification/pred.objects" node [shape=rect, style=filled, color=red, fillcolor=pink]; // Parameter style "AssemblyVerification/base_color"; "AssemblyVerification/batch_size"; "AssemblyVerification/filter.args"; "AssemblyVerification/image_mean"; "AssemblyVerification/label_oper.args"; "AssemblyVerification/model"; "AssemblyVerification/objescts_statistics"; "AssemblyVerification/statistics"; "AssemblyVerification/templates"; 
"AssemblyVerification/trainer.args"; "AssemblyVerification/training_log" node [shape=point, style=filled, color=red, fillcolor=pink]; // SingleVirtualInput parameter style node [shape=invtriangle, style=filled, color=red, fillcolor=pink]; // MultiVirtualInput parameter style node [shape=rect, style=dashed, color=red, fillcolor=default]; // Output parameter style "AssemblyVerification/classes" subgraph "cluster_AssemblyVerification" { label="AssemblyVerification"; "AssemblyVerification/base_color" [label="id: AssemblyVerification/base_color\ltype: visionflow::param::BaseColor\lupdate: 1970-01-01 08:00:00\l"]; "AssemblyVerification/base_color_conf" [label="id: AssemblyVerification/base_color_conf\ltype: visionflow::confs::BaseColorConf\lupdate: 1970-01-01 08:00:00\ldocs: Configurator to config input images' \lbase color.\l"]; "AssemblyVerification/batch_size" [label="id: AssemblyVerification/batch_size\ltype: visionflow::param::InferenceBatchSize\lupdate: 1970-01-01 08:00:00\ldocs: Inference BatchSize, Currently \lonly contains batch size. It may \lneed to be refactored in the future.\l"]; "AssemblyVerification/batch_size_conf" [label="id: AssemblyVerification/batch_size_conf\ltype: visionflow::confs::InferenceBatchSizeConf\lupdate: 1970-01-01 08:00:00\ldocs: Configurator to set inference \lbatch size.\l"]; "AssemblyVerification/classes" [label="id: AssemblyVerification/classes\ltype: visionflow::param::LabelClasses\lupdate: 1970-01-01 08:00:00\ldocs: A container to manage label classes.\l"]; "AssemblyVerification/comparator" [label="id: AssemblyVerification/comparator\ltype: visionflow::opers::RegionsMatcher\lupdate: 1970-01-01 08:00:00\ldocs: Operator to compare the predicted \lregions with the ground truth \lto get the category (in [TP, FP, \lTN, FN]) of each region.\l"]; "AssemblyVerification/feature_map" [label="id: AssemblyVerification/feature_map\ltype: visionflow::props::FeatureMap\lupdate: 1970-01-01 08:00:00\ldocs: A data structure used to store \lfeature maps detected by each \lalgorithm module.\l"]; "AssemblyVerification/filter" [label="id: AssemblyVerification/filter\ltype: visionflow::opers::AssemblyVerificationFilter\lupdate: 1970-01-01 08:00:00\ldocs: AssemblyVerification feature map \lfilter.\l"]; "AssemblyVerification/filter.args" [label="id: AssemblyVerification/filter.args\ltype: visionflow::param::AssemblyVerificationFilterParameters\lupdate: 1970-01-01 08:00:00\l"]; "AssemblyVerification/filter.conf" [label="id: AssemblyVerification/filter.conf\ltype: visionflow::confs::AssemblyVerificationFeatureFilterConf\lupdate: 1970-01-01 08:00:00\ldocs: Configurator UI to config AssemblyVerification \lnms and filter parameters.\l"]; "AssemblyVerification/image_mean" [label="id: AssemblyVerification/image_mean\ltype: visionflow::param::ImageMean\lupdate: 1970-01-01 08:00:00\ldocs: Image mean parameters\l"]; "AssemblyVerification/image_mean_conf" [label="id: AssemblyVerification/image_mean_conf\ltype: visionflow::confs::ImageMeanConf\lupdate: 1970-01-01 08:00:00\ldocs: ImageMeanConf Configurator class \lto compute the image mean values \lin the views.\l"]; "AssemblyVerification/infer" [label="id: AssemblyVerification/infer\ltype: visionflow::opers::AssemblyVerificationInfer\lupdate: 1970-01-01 08:00:00\ldocs: AssemblyVerification Caffe inference \lengine.\l"]; "AssemblyVerification/label_classes.conf" [label="id: AssemblyVerification/label_classes.conf\ltype: visionflow::confs::LabelClassesConf\lupdate: 1970-01-01 08:00:00\ldocs: Configurator class to generate \llabel classes 
parameter.\l"]; "AssemblyVerification/label_oper" [label="id: AssemblyVerification/label_oper\ltype: visionflow::opers::AssemblyVerificationLabeler\lupdate: 1970-01-01 08:00:00\ldocs: Annotate operator for AssemblyVerification \ltool.\l"]; "AssemblyVerification/label_oper.args" [label="id: AssemblyVerification/label_oper.args\ltype: visionflow::param::BinaryPacks\lupdate: 1970-01-01 08:00:00\ldocs: A container to manage list of \lbinary datas.\l"]; "AssemblyVerification/label_oper.conf" [label="id: AssemblyVerification/label_oper.conf\ltype: visionflow::confs::CustomConf\lupdate: 1970-01-01 08:00:00\ldocs: Configurator class to generate \lany user-defined parameters\l"]; "AssemblyVerification/mask" [label="id: AssemblyVerification/mask\ltype: visionflow::props::PolygonRegionList\lupdate: 1970-01-01 08:00:00\ldocs: List structure to manage polygon \lregions.\l"]; "AssemblyVerification/match_result" [label="id: AssemblyVerification/match_result\ltype: visionflow::props::RegionMatchResultList\lupdate: 1970-01-01 08:00:00\ldocs: A data structure to store list \lof RegionMatchResult.\l"]; "AssemblyVerification/model" [label="id: AssemblyVerification/model\ltype: visionflow::param::BinaryPacks\lupdate: 1970-01-01 08:00:00\ldocs: A container to manage list of \lbinary datas.\l"]; "AssemblyVerification/objects_statistician" [label="id: AssemblyVerification/objects_statistician\ltype: visionflow::confs::RegionMatchResultCounter\lupdate: 1970-01-01 08:00:00\ldocs: Configurator to count region match \lresults.\l"]; "AssemblyVerification/objescts_statistics" [label="id: AssemblyVerification/objescts_statistics\ltype: visionflow::param::ModelEvaluationMetrics\lupdate: 1970-01-01 08:00:00\ldocs: A container to manage table.\l"]; "AssemblyVerification/pred.keypoints" [label="id: AssemblyVerification/pred.keypoints\ltype: visionflow::props::PolygonRegionList\lupdate: 1970-01-01 08:00:00\ldocs: List structure to manage polygon \lregions.\l"]; "AssemblyVerification/pred.objects" [label="id: AssemblyVerification/pred.objects\ltype: visionflow::props::PolygonWithStringMapRegionList\lupdate: 1970-01-01 08:00:00\ldocs: List structure to manage polygon \lregions.\l"]; "AssemblyVerification/prediction_objects_matcher" [label="id: AssemblyVerification/prediction_objects_matcher\ltype: visionflow::opers::AssemblyVerificationObjectMatcher\lupdate: 1970-01-01 08:00:00\ldocs: AssemblyVerification object matcher.\l"]; "AssemblyVerification/statistician" [label="id: AssemblyVerification/statistician\ltype: visionflow::confs::RegionMatchResultCounter\lupdate: 1970-01-01 08:00:00\ldocs: Configurator to count region match \lresults.\l"]; "AssemblyVerification/statistics" [label="id: AssemblyVerification/statistics\ltype: visionflow::param::ModelEvaluationMetrics\lupdate: 1970-01-01 08:00:00\ldocs: A container to manage table.\l"]; "AssemblyVerification/tagged_polygons" [label="id: AssemblyVerification/tagged_polygons\ltype: visionflow::props::TaggedPolygonList\lupdate: 1970-01-01 08:00:00\ldocs: Property TaggedPolygonList implementation.\l"]; "AssemblyVerification/tagged_views" [label="id: AssemblyVerification/tagged_views\ltype: visionflow::props::ViewList\lupdate: 1970-01-01 08:00:00\ldocs: Property ViewList implementation.\l"]; "AssemblyVerification/templates" [label="id: AssemblyVerification/templates\ltype: visionflow::param::AssemblyVerificationTemplates\lupdate: 1970-01-01 08:00:00\l"]; "AssemblyVerification/templates_conf" [label="id: AssemblyVerification/templates_conf\ltype: 
visionflow::confs::AssemblyVerificationTemplateConf\lupdate: 1970-01-01 08:00:00\ldocs: Configurator UI to config AssemblyVerification \lmatch templates.\l"]; "AssemblyVerification/trainer" [label="id: AssemblyVerification/trainer\ltype: visionflow::confs::AssemblyVerificationTrainer\lupdate: 1970-01-01 08:00:00\ldocs: AssemblyVerification model trainer.\l"]; "AssemblyVerification/trainer.args" [label="id: AssemblyVerification/trainer.args\ltype: visionflow::param::AssemblyVerificationTrainingParameters\lupdate: 1970-01-01 08:00:00\l"]; "AssemblyVerification/trainer.conf" [label="id: AssemblyVerification/trainer.conf\ltype: visionflow::confs::AssemblyVerificationTrainerConf\lupdate: 1970-01-01 08:00:00\ldocs: Configurator to set AssemblyVerification \ltrainer options.\l"]; "AssemblyVerification/training_log" [label="id: AssemblyVerification/training_log\ltype: visionflow::param::TrainingLog\lupdate: 1970-01-01 08:00:00\l"]; "AssemblyVerification/truth" [label="id: AssemblyVerification/truth\ltype: visionflow::props::PolygonRegionList\lupdate: 1970-01-01 08:00:00\ldocs: List structure to manage polygon \lregions.\l"]; "AssemblyVerification/truth.objects" [label="id: AssemblyVerification/truth.objects\ltype: visionflow::props::PolygonWithStringMapRegionList\lupdate: 1970-01-01 08:00:00\ldocs: List structure to manage polygon \lregions.\l"]; "AssemblyVerification/truth_objects_matcher" [label="id: AssemblyVerification/truth_objects_matcher\ltype: visionflow::opers::AssemblyVerificationObjectMatcher\lupdate: 1970-01-01 08:00:00\ldocs: AssemblyVerification object matcher.\l"]; "AssemblyVerification/view_tagger" [label="id: AssemblyVerification/view_tagger\ltype: visionflow::opers::ViewTagger\lupdate: 1970-01-01 08:00:00\ldocs: Operator used to tag the views \lwith some already tagged polygons \lautomatically. 
The spilt_tag and \ltags of the most matched tagged_polygon \lselected using CIou will be set \lto the view, otherwise, the view \lwill remain its original spilt_tag \land tags info.\l"] } "AssemblyVerification/base_color" -> "AssemblyVerification/image_mean_conf"; "AssemblyVerification/base_color" -> "AssemblyVerification/trainer"; "AssemblyVerification/base_color_conf" -> "AssemblyVerification/base_color"; "AssemblyVerification/batch_size" -> "AssemblyVerification/infer"; "AssemblyVerification/batch_size_conf" -> "AssemblyVerification/batch_size"; "AssemblyVerification/classes" -> "AssemblyVerification/label_oper"; "AssemblyVerification/classes" -> "AssemblyVerification/trainer"; "AssemblyVerification/comparator" -> "AssemblyVerification/match_result"; "AssemblyVerification/feature_map" -> "AssemblyVerification/filter"; "AssemblyVerification/filter" -> "AssemblyVerification/pred.keypoints"; "AssemblyVerification/filter.args" -> "AssemblyVerification/filter"; "AssemblyVerification/filter.conf" -> "AssemblyVerification/filter.args"; "AssemblyVerification/image" -> "AssemblyVerification/image_mean_conf"; "AssemblyVerification/image" -> "AssemblyVerification/infer"; "AssemblyVerification/image" -> "AssemblyVerification/label_oper"; "AssemblyVerification/image" -> "AssemblyVerification/trainer"; "AssemblyVerification/image_mean" -> "AssemblyVerification/trainer"; "AssemblyVerification/image_mean_conf" -> "AssemblyVerification/image_mean"; "AssemblyVerification/infer" -> "AssemblyVerification/feature_map"; "AssemblyVerification/label_classes.conf" -> "AssemblyVerification/classes"; "AssemblyVerification/label_oper" -> "AssemblyVerification/mask"; "AssemblyVerification/label_oper" -> "AssemblyVerification/tagged_polygons"; "AssemblyVerification/label_oper" -> "AssemblyVerification/truth"; "AssemblyVerification/label_oper.args" -> "AssemblyVerification/label_oper"; "AssemblyVerification/label_oper.conf" -> "AssemblyVerification/label_oper.args"; "AssemblyVerification/mask" -> "AssemblyVerification/trainer"; "AssemblyVerification/model" -> "AssemblyVerification/filter"; "AssemblyVerification/model" -> "AssemblyVerification/infer"; "AssemblyVerification/objects_statistician" -> "AssemblyVerification/objescts_statistics"; "AssemblyVerification/pred.keypoints" -> "AssemblyVerification/comparator"; "AssemblyVerification/pred.keypoints" -> "AssemblyVerification/prediction_objects_matcher"; "AssemblyVerification/pred.keypoints" -> "AssemblyVerification/statistician"; "AssemblyVerification/pred.objects" -> "AssemblyVerification/objects_statistician"; "AssemblyVerification/prediction_objects_matcher" -> "AssemblyVerification/pred.objects"; "AssemblyVerification/statistician" -> "AssemblyVerification/statistics"; "AssemblyVerification/tagged_polygons" -> "AssemblyVerification/view_tagger"; "AssemblyVerification/tagged_views" -> "AssemblyVerification/comparator"; "AssemblyVerification/tagged_views" -> "AssemblyVerification/image_mean_conf"; "AssemblyVerification/tagged_views" -> "AssemblyVerification/objects_statistician"; "AssemblyVerification/tagged_views" -> "AssemblyVerification/statistician"; "AssemblyVerification/tagged_views" -> "AssemblyVerification/trainer"; "AssemblyVerification/tagged_views" -> "AssemblyVerification/truth_objects_matcher"; "AssemblyVerification/templates" -> "AssemblyVerification/prediction_objects_matcher"; "AssemblyVerification/templates" -> "AssemblyVerification/truth_objects_matcher"; "AssemblyVerification/templates_conf" -> "AssemblyVerification/templates"; 
"AssemblyVerification/trainer" -> "AssemblyVerification/model"; "AssemblyVerification/trainer" -> "AssemblyVerification/training_log"; "AssemblyVerification/trainer.args" -> "AssemblyVerification/trainer"; "AssemblyVerification/trainer.conf" -> "AssemblyVerification/trainer.args"; "AssemblyVerification/truth" -> "AssemblyVerification/comparator"; "AssemblyVerification/truth" -> "AssemblyVerification/statistician"; "AssemblyVerification/truth" -> "AssemblyVerification/trainer"; "AssemblyVerification/truth" -> "AssemblyVerification/truth_objects_matcher"; "AssemblyVerification/truth.objects" -> "AssemblyVerification/objects_statistician"; "AssemblyVerification/truth_objects_matcher" -> "AssemblyVerification/truth.objects"; "AssemblyVerification/view_tagger" -> "AssemblyVerification/tagged_views"; "AssemblyVerification/views" -> "AssemblyVerification/filter"; "AssemblyVerification/views" -> "AssemblyVerification/infer"; "AssemblyVerification/views" -> "AssemblyVerification/prediction_objects_matcher"; "AssemblyVerification/views" -> "AssemblyVerification/view_tagger" } Location Tool# Location Tool. digraph "OnlyTool: Location" { label="OnlyTool: Location"; rankdir="TB"; node [shape=ellipse, style=filled, color=blue, fillcolor=lightblue]; // Operator style "Location/comparator"; "Location/filter"; "Location/infer"; "Location/label_oper"; "Location/prediction_objects_matcher"; "Location/truth_objects_matcher"; "Location/view_tagger" node [shape=ellipse, style=filled, color=red, fillcolor=pink]; // Configurator style "Location/base_color_conf"; "Location/batch_size_conf"; "Location/filter.conf"; "Location/image_mean_conf"; "Location/label_classes.conf"; "Location/label_oper.conf"; "Location/objects_statistician"; "Location/statistician"; "Location/templates_conf"; "Location/trainer"; "Location/trainer.conf" node [shape=rect, style=filled, color=blue, fillcolor=lightblue]; // Property style "Location/feature_map"; "Location/mask"; "Location/match_result"; "Location/tagged_polygons"; "Location/tagged_views"; "Location/truth"; "Location/truth.objects" node [shape=point, style=filled, color=blue, fillcolor=lightblue]; // SingleVirtualInput property style "Location/image"; "Location/views" node [shape=invtriangle, style=filled, color=blue, fillcolor=lightblue]; // MultiVirtualInput property style node [shape=rect, style=dashed, color=blue, fillcolor=default]; // Output property style "Location/pred.keypoints"; "Location/pred.objects" node [shape=rect, style=filled, color=red, fillcolor=pink]; // Parameter style "Location/base_color"; "Location/batch_size"; "Location/filter.args"; "Location/image_mean"; "Location/label_oper.args"; "Location/model"; "Location/objescts_statistics"; "Location/statistics"; "Location/templates"; "Location/trainer.args"; "Location/training_log" node [shape=point, style=filled, color=red, fillcolor=pink]; // SingleVirtualInput parameter style node [shape=invtriangle, style=filled, color=red, fillcolor=pink]; // MultiVirtualInput parameter style node [shape=rect, style=dashed, color=red, fillcolor=default]; // Output parameter style "Location/classes" subgraph "cluster_Location" { label="Location"; "Location/base_color" [label="id: Location/base_color\ltype: visionflow::param::BaseColor\lupdate: 1970-01-01 08:00:00\l"]; "Location/base_color_conf" [label="id: Location/base_color_conf\ltype: visionflow::confs::BaseColorConf\lupdate: 1970-01-01 08:00:00\ldocs: Configurator to config input images' \lbase color.\l"]; "Location/batch_size" [label="id: Location/batch_size\ltype: 
visionflow::param::InferenceBatchSize\lupdate: 1970-01-01 08:00:00\ldocs: Inference BatchSize, Currently \lonly contains batch size. It may \lneed to be refactored in the future.\l"]; "Location/batch_size_conf" [label="id: Location/batch_size_conf\ltype: visionflow::confs::InferenceBatchSizeConf\lupdate: 1970-01-01 08:00:00\ldocs: Configurator to set inference \lbatch size.\l"]; "Location/classes" [label="id: Location/classes\ltype: visionflow::param::LabelClasses\lupdate: 1970-01-01 08:00:00\ldocs: A container to manage label classes.\l"]; "Location/comparator" [label="id: Location/comparator\ltype: visionflow::opers::RegionsMatcher\lupdate: 1970-01-01 08:00:00\ldocs: Operator to compare the predicted \lregions with the ground truth \lto get the category (in [TP, FP, \lTN, FN]) of each region.\l"]; "Location/feature_map" [label="id: Location/feature_map\ltype: visionflow::props::FeatureMap\lupdate: 1970-01-01 08:00:00\ldocs: A data structure used to store \lfeature maps detected by each \lalgorithm module.\l"]; "Location/filter" [label="id: Location/filter\ltype: visionflow::opers::LocationFilter\lupdate: 1970-01-01 08:00:00\ldocs: Location feature map filter.\l"]; "Location/filter.args" [label="id: Location/filter.args\ltype: visionflow::param::LocationFilterParameters\lupdate: 1970-01-01 08:00:00\l"]; "Location/filter.conf" [label="id: Location/filter.conf\ltype: visionflow::confs::LocationFeatureFilterConf\lupdate: 1970-01-01 08:00:00\ldocs: Configurator UI to config Location \lnms and filter parameters.\l"]; "Location/image_mean" [label="id: Location/image_mean\ltype: visionflow::param::ImageMean\lupdate: 1970-01-01 08:00:00\ldocs: Image mean parameters\l"]; "Location/image_mean_conf" [label="id: Location/image_mean_conf\ltype: visionflow::confs::ImageMeanConf\lupdate: 1970-01-01 08:00:00\ldocs: ImageMeanConf Configurator class \lto compute the image mean values \lin the views.\l"]; "Location/infer" [label="id: Location/infer\ltype: visionflow::opers::LocationInfer\lupdate: 1970-01-01 08:00:00\ldocs: Location Caffe inference engine.\l"]; "Location/label_classes.conf" [label="id: Location/label_classes.conf\ltype: visionflow::confs::LabelClassesConf\lupdate: 1970-01-01 08:00:00\ldocs: Configurator class to generate \llabel classes parameter.\l"]; "Location/label_oper" [label="id: Location/label_oper\ltype: visionflow::opers::LocationLabeler\lupdate: 1970-01-01 08:00:00\ldocs: Annotate operator for Location \ltool.\l"]; "Location/label_oper.args" [label="id: Location/label_oper.args\ltype: visionflow::param::BinaryPacks\lupdate: 1970-01-01 08:00:00\ldocs: A container to manage list of \lbinary datas.\l"]; "Location/label_oper.conf" [label="id: Location/label_oper.conf\ltype: visionflow::confs::CustomConf\lupdate: 1970-01-01 08:00:00\ldocs: Configurator class to generate \lany user-defined parameters\l"]; "Location/mask" [label="id: Location/mask\ltype: visionflow::props::PolygonRegionList\lupdate: 1970-01-01 08:00:00\ldocs: List structure to manage polygon \lregions.\l"]; "Location/match_result" [label="id: Location/match_result\ltype: visionflow::props::RegionMatchResultList\lupdate: 1970-01-01 08:00:00\ldocs: A data structure to store list \lof RegionMatchResult.\l"]; "Location/model" [label="id: Location/model\ltype: visionflow::param::BinaryPacks\lupdate: 1970-01-01 08:00:00\ldocs: A container to manage list of \lbinary datas.\l"]; "Location/objects_statistician" [label="id: Location/objects_statistician\ltype: visionflow::confs::RegionMatchResultCounter\lupdate: 1970-01-01 
08:00:00\ldocs: Configurator to count region match \lresults.\l"]; "Location/objescts_statistics" [label="id: Location/objescts_statistics\ltype: visionflow::param::ModelEvaluationMetrics\lupdate: 1970-01-01 08:00:00\ldocs: A container to manage table.\l"]; "Location/pred.keypoints" [label="id: Location/pred.keypoints\ltype: visionflow::props::PolygonRegionList\lupdate: 1970-01-01 08:00:00\ldocs: List structure to manage polygon \lregions.\l"]; "Location/pred.objects" [label="id: Location/pred.objects\ltype: visionflow::props::PolygonRegionList\lupdate: 1970-01-01 08:00:00\ldocs: List structure to manage polygon \lregions.\l"]; "Location/prediction_objects_matcher" [label="id: Location/prediction_objects_matcher\ltype: visionflow::opers::LocationObjectMatcher\lupdate: 1970-01-01 08:00:00\ldocs: Location object matcher.\l"]; "Location/statistician" [label="id: Location/statistician\ltype: visionflow::confs::LocationRegionMatchResultCounter\lupdate: 1970-01-01 08:00:00\ldocs: Configurator to count location \lregion match results.\l"]; "Location/statistics" [label="id: Location/statistics\ltype: visionflow::param::LocationModelEvaluationMetrics\lupdate: 1970-01-01 08:00:00\ldocs: A container to manage table.\l"]; "Location/tagged_polygons" [label="id: Location/tagged_polygons\ltype: visionflow::props::TaggedPolygonList\lupdate: 1970-01-01 08:00:00\ldocs: Property TaggedPolygonList implementation.\l"]; "Location/tagged_views" [label="id: Location/tagged_views\ltype: visionflow::props::ViewList\lupdate: 1970-01-01 08:00:00\ldocs: Property ViewList implementation.\l"]; "Location/templates" [label="id: Location/templates\ltype: visionflow::param::LocationTemplates\lupdate: 1970-01-01 08:00:00\l"]; "Location/templates_conf" [label="id: Location/templates_conf\ltype: visionflow::confs::LocationTemplateConf\lupdate: 1970-01-01 08:00:00\ldocs: Configurator UI to config Location \lmatch templates.\l"]; "Location/trainer" [label="id: Location/trainer\ltype: visionflow::confs::LocationTrainer\lupdate: 1970-01-01 08:00:00\ldocs: Location model trainer.\l"]; "Location/trainer.args" [label="id: Location/trainer.args\ltype: visionflow::param::LocationTrainingParameters\lupdate: 1970-01-01 08:00:00\l"]; "Location/trainer.conf" [label="id: Location/trainer.conf\ltype: visionflow::confs::LocationTrainerConf\lupdate: 1970-01-01 08:00:00\ldocs: Configurator to set Location trainer \loptions.\l"]; "Location/training_log" [label="id: Location/training_log\ltype: visionflow::param::TrainingLog\lupdate: 1970-01-01 08:00:00\l"]; "Location/truth" [label="id: Location/truth\ltype: visionflow::props::PolygonRegionList\lupdate: 1970-01-01 08:00:00\ldocs: List structure to manage polygon \lregions.\l"]; "Location/truth.objects" [label="id: Location/truth.objects\ltype: visionflow::props::PolygonRegionList\lupdate: 1970-01-01 08:00:00\ldocs: List structure to manage polygon \lregions.\l"]; "Location/truth_objects_matcher" [label="id: Location/truth_objects_matcher\ltype: visionflow::opers::LocationObjectMatcher\lupdate: 1970-01-01 08:00:00\ldocs: Location object matcher.\l"]; "Location/view_tagger" [label="id: Location/view_tagger\ltype: visionflow::opers::ViewTagger\lupdate: 1970-01-01 08:00:00\ldocs: Operator used to tag the views \lwith some already tagged polygons \lautomatically. 
The spilt_tag and \ltags of the most matched tagged_polygon \lselected using CIou will be set \lto the view, otherwise, the view \lwill remain its original spilt_tag \land tags info.\l"] } "Location/base_color" -> "Location/image_mean_conf"; "Location/base_color" -> "Location/trainer"; "Location/base_color_conf" -> "Location/base_color"; "Location/batch_size" -> "Location/infer"; "Location/batch_size_conf" -> "Location/batch_size"; "Location/classes" -> "Location/label_oper"; "Location/classes" -> "Location/trainer"; "Location/comparator" -> "Location/match_result"; "Location/feature_map" -> "Location/filter"; "Location/filter" -> "Location/pred.keypoints"; "Location/filter.args" -> "Location/filter"; "Location/filter.conf" -> "Location/filter.args"; "Location/image" -> "Location/image_mean_conf"; "Location/image" -> "Location/infer"; "Location/image" -> "Location/label_oper"; "Location/image" -> "Location/trainer"; "Location/image_mean" -> "Location/trainer"; "Location/image_mean_conf" -> "Location/image_mean"; "Location/infer" -> "Location/feature_map"; "Location/label_classes.conf" -> "Location/classes"; "Location/label_oper" -> "Location/mask"; "Location/label_oper" -> "Location/tagged_polygons"; "Location/label_oper" -> "Location/truth"; "Location/label_oper.args" -> "Location/label_oper"; "Location/label_oper.conf" -> "Location/label_oper.args"; "Location/mask" -> "Location/trainer"; "Location/model" -> "Location/filter"; "Location/model" -> "Location/infer"; "Location/objects_statistician" -> "Location/objescts_statistics"; "Location/pred.keypoints" -> "Location/comparator"; "Location/pred.keypoints" -> "Location/prediction_objects_matcher"; "Location/pred.keypoints" -> "Location/statistician"; "Location/pred.objects" -> "Location/objects_statistician"; "Location/prediction_objects_matcher" -> "Location/pred.objects"; "Location/statistician" -> "Location/statistics"; "Location/tagged_polygons" -> "Location/view_tagger"; "Location/tagged_views" -> "Location/comparator"; "Location/tagged_views" -> "Location/image_mean_conf"; "Location/tagged_views" -> "Location/objects_statistician"; "Location/tagged_views" -> "Location/statistician"; "Location/tagged_views" -> "Location/trainer"; "Location/tagged_views" -> "Location/truth_objects_matcher"; "Location/templates" -> "Location/prediction_objects_matcher"; "Location/templates" -> "Location/truth_objects_matcher"; "Location/templates_conf" -> "Location/templates"; "Location/trainer" -> "Location/model"; "Location/trainer" -> "Location/training_log"; "Location/trainer.args" -> "Location/trainer"; "Location/trainer.conf" -> "Location/trainer.args"; "Location/truth" -> "Location/comparator"; "Location/truth" -> "Location/statistician"; "Location/truth" -> "Location/trainer"; "Location/truth" -> "Location/truth_objects_matcher"; "Location/truth.objects" -> "Location/objects_statistician"; "Location/truth_objects_matcher" -> "Location/truth.objects"; "Location/view_tagger" -> "Location/tagged_views"; "Location/views" -> "Location/filter"; "Location/views" -> "Location/infer"; "Location/views" -> "Location/prediction_objects_matcher"; "Location/views" -> "Location/view_tagger" } Classification Tool# Classification Tool digraph "OnlyTool: Classification" { label="OnlyTool: Classification"; rankdir="TB"; node [shape=ellipse, style=filled, color=blue, fillcolor=lightblue]; // Operator style "Classification/comparator"; "Classification/infer"; "Classification/label_oper"; "Classification/view_tagger" node [shape=ellipse, style=filled, color=red, 
fillcolor=pink]; // Configurator style "Classification/base_color_conf"; "Classification/batch_size_conf"; "Classification/image_mean_conf"; "Classification/infer.conf"; "Classification/label_classes.conf"; "Classification/label_oper.conf"; "Classification/statistician"; "Classification/trainer"; "Classification/trainer.conf" node [shape=rect, style=filled, color=blue, fillcolor=lightblue]; // Property style "Classification/heatmap"; "Classification/mask"; "Classification/match_result"; "Classification/tagged_polygons"; "Classification/tagged_views"; "Classification/truth" node [shape=point, style=filled, color=blue, fillcolor=lightblue]; // SingleVirtualInput property style "Classification/image"; "Classification/views" node [shape=invtriangle, style=filled, color=blue, fillcolor=lightblue]; // MultiVirtualInput property style node [shape=rect, style=dashed, color=blue, fillcolor=default]; // Output property style "Classification/pred" node [shape=rect, style=filled, color=red, fillcolor=pink]; // Parameter style "Classification/base_color"; "Classification/batch_size"; "Classification/image_mean"; "Classification/infer.args"; "Classification/label_oper.args"; "Classification/model"; "Classification/statistics"; "Classification/trainer.args"; "Classification/training_log" node [shape=point, style=filled, color=red, fillcolor=pink]; // SingleVirtualInput parameter style node [shape=invtriangle, style=filled, color=red, fillcolor=pink]; // MultiVirtualInput parameter style node [shape=rect, style=dashed, color=red, fillcolor=default]; // Output parameter style "Classification/classes" subgraph "cluster_Classification" { label="Classification"; "Classification/base_color" [label="id: Classification/base_color\ltype: visionflow::param::BaseColor\lupdate: 1970-01-01 08:00:00\l"]; "Classification/base_color_conf" [label="id: Classification/base_color_conf\ltype: visionflow::confs::BaseColorConf\lupdate: 1970-01-01 08:00:00\ldocs: Configurator to config input images' \lbase color.\l"]; "Classification/batch_size" [label="id: Classification/batch_size\ltype: visionflow::param::InferenceBatchSize\lupdate: 1970-01-01 08:00:00\ldocs: Inference BatchSize, Currently \lonly contains batch size. 
It may \lneed to be refactored in the future.\l"]; "Classification/batch_size_conf" [label="id: Classification/batch_size_conf\ltype: visionflow::confs::InferenceBatchSizeConf\lupdate: 1970-01-01 08:00:00\ldocs: Configurator to set inference \lbatch size.\l"]; "Classification/classes" [label="id: Classification/classes\ltype: visionflow::param::LabelClasses\lupdate: 1970-01-01 08:00:00\ldocs: A container to manage label classes.\l"]; "Classification/comparator" [label="id: Classification/comparator\ltype: visionflow::opers::RegionsMatcher\lupdate: 1970-01-01 08:00:00\ldocs: Operator to compare the predicted \lregions with the ground truth \lto get the category (in [TP, FP, \lTN, FN]) of each region.\l"]; "Classification/heatmap" [label="id: Classification/heatmap\ltype: visionflow::props::FeatureMap\lupdate: 1970-01-01 08:00:00\ldocs: A data structure used to store \lfeature maps detected by each \lalgorithm module.\l"]; "Classification/image_mean" [label="id: Classification/image_mean\ltype: visionflow::param::ImageMean\lupdate: 1970-01-01 08:00:00\ldocs: Image mean parameters\l"]; "Classification/image_mean_conf" [label="id: Classification/image_mean_conf\ltype: visionflow::confs::ImageMeanConf\lupdate: 1970-01-01 08:00:00\ldocs: ImageMeanConf Configurator class \lto compute the image mean values \lin the views.\l"]; "Classification/infer" [label="id: Classification/infer\ltype: visionflow::opers::ClassificationInfer\lupdate: 1970-01-01 08:00:00\ldocs: Classification Caffe inference engine.\l"]; "Classification/infer.args" [label="id: Classification/infer.args\ltype: visionflow::param::ClassificationInferParameters\lupdate: 1970-01-01 08:00:00\l"]; "Classification/infer.conf" [label="id: Classification/infer.conf\ltype: visionflow::confs::ClassificationInferConf\lupdate: 1970-01-01 08:00:00\ldocs: Configurator to set Classification \linference parameters.\l"]; "Classification/label_classes.conf" [label="id: Classification/label_classes.conf\ltype: visionflow::confs::LabelClassesConf\lupdate: 1970-01-01 08:00:00\ldocs: Configurator class to generate \llabel classes parameter.\l"]; "Classification/label_oper" [label="id: Classification/label_oper\ltype: visionflow::opers::ClassificationLabeler\lupdate: 1970-01-01 08:00:00\ldocs: Annotate operator for Classification \ltool.\l"]; "Classification/label_oper.args" [label="id: Classification/label_oper.args\ltype: visionflow::param::BinaryPacks\lupdate: 1970-01-01 08:00:00\ldocs: A container to manage list of \lbinary datas.\l"]; "Classification/label_oper.conf" [label="id: Classification/label_oper.conf\ltype: visionflow::confs::CustomConf\lupdate: 1970-01-01 08:00:00\ldocs: Configurator class to generate \lany user-defined parameters\l"]; "Classification/mask" [label="id: Classification/mask\ltype: visionflow::props::PolygonRegionList\lupdate: 1970-01-01 08:00:00\ldocs: List structure to manage polygon \lregions.\l"]; "Classification/match_result" [label="id: Classification/match_result\ltype: visionflow::props::RegionMatchResultList\lupdate: 1970-01-01 08:00:00\ldocs: A data structure to store list \lof RegionMatchResult.\l"]; "Classification/model" [label="id: Classification/model\ltype: visionflow::param::BinaryPacks\lupdate: 1970-01-01 08:00:00\ldocs: A container to manage list of \lbinary datas.\l"]; "Classification/pred" [label="id: Classification/pred\ltype: visionflow::props::MultiNamesPolygonRegionList\lupdate: 1970-01-01 08:00:00\ldocs: List structure to manage polygon \lregions.\l"]; "Classification/statistician" [label="id: 
Classification/statistician\ltype: visionflow::confs::ClassificationRegionMatchResultCounter\lupdate: 1970-01-01 08:00:00\ldocs: Configurator to count classification \lregion match results.\l"]; "Classification/statistics" [label="id: Classification/statistics\ltype: visionflow::param::ModelEvaluationMetrics\lupdate: 1970-01-01 08:00:00\ldocs: A container to manage table.\l"]; "Classification/tagged_polygons" [label="id: Classification/tagged_polygons\ltype: visionflow::props::TaggedPolygonList\lupdate: 1970-01-01 08:00:00\ldocs: Property TaggedPolygonList implementation.\l"]; "Classification/tagged_views" [label="id: Classification/tagged_views\ltype: visionflow::props::ViewList\lupdate: 1970-01-01 08:00:00\ldocs: Property ViewList implementation.\l"]; "Classification/trainer" [label="id: Classification/trainer\ltype: visionflow::confs::ClassificationTrainer\lupdate: 1970-01-01 08:00:00\ldocs: Classification model trainer.\l"]; "Classification/trainer.args" [label="id: Classification/trainer.args\ltype: visionflow::param::ClassificationTrainingParameters\lupdate: 1970-01-01 08:00:00\l"]; "Classification/trainer.conf" [label="id: Classification/trainer.conf\ltype: visionflow::confs::ClassificationTrainerConf\lupdate: 1970-01-01 08:00:00\ldocs: Configurator to set Classification \ltrainer options.\l"]; "Classification/training_log" [label="id: Classification/training_log\ltype: visionflow::param::TrainingLog\lupdate: 1970-01-01 08:00:00\l"]; "Classification/truth" [label="id: Classification/truth\ltype: visionflow::props::PolygonRegionList\lupdate: 1970-01-01 08:00:00\ldocs: List structure to manage polygon \lregions.\l"]; "Classification/view_tagger" [label="id: Classification/view_tagger\ltype: visionflow::opers::ViewTagger\lupdate: 1970-01-01 08:00:00\ldocs: Operator used to tag the views \lwith some already tagged polygons \lautomatically. 
The spilt_tag and \ltags of the most matched tagged_polygon \lselected using CIou will be set \lto the view, otherwise, the view \lwill remain its original spilt_tag \land tags info.\l"] } "Classification/base_color" -> "Classification/image_mean_conf"; "Classification/base_color" -> "Classification/trainer"; "Classification/base_color_conf" -> "Classification/base_color"; "Classification/batch_size" -> "Classification/infer"; "Classification/batch_size_conf" -> "Classification/batch_size"; "Classification/classes" -> "Classification/label_oper"; "Classification/classes" -> "Classification/trainer"; "Classification/comparator" -> "Classification/match_result"; "Classification/image" -> "Classification/image_mean_conf"; "Classification/image" -> "Classification/infer"; "Classification/image" -> "Classification/label_oper"; "Classification/image" -> "Classification/trainer"; "Classification/image_mean" -> "Classification/trainer"; "Classification/image_mean_conf" -> "Classification/image_mean"; "Classification/infer" -> "Classification/heatmap"; "Classification/infer" -> "Classification/pred"; "Classification/infer.args" -> "Classification/infer"; "Classification/infer.conf" -> "Classification/infer.args"; "Classification/label_classes.conf" -> "Classification/classes"; "Classification/label_oper" -> "Classification/mask"; "Classification/label_oper" -> "Classification/tagged_polygons"; "Classification/label_oper" -> "Classification/truth"; "Classification/label_oper.args" -> "Classification/label_oper"; "Classification/label_oper.conf" -> "Classification/label_oper.args"; "Classification/mask" -> "Classification/trainer"; "Classification/model" -> "Classification/infer"; "Classification/pred" -> "Classification/comparator"; "Classification/pred" -> "Classification/statistician"; "Classification/statistician" -> "Classification/statistics"; "Classification/tagged_polygons" -> "Classification/view_tagger"; "Classification/tagged_views" -> "Classification/comparator"; "Classification/tagged_views" -> "Classification/image_mean_conf"; "Classification/tagged_views" -> "Classification/statistician"; "Classification/tagged_views" -> "Classification/trainer"; "Classification/trainer" -> "Classification/model"; "Classification/trainer" -> "Classification/training_log"; "Classification/trainer.args" -> "Classification/trainer"; "Classification/trainer.conf" -> "Classification/trainer.args"; "Classification/truth" -> "Classification/comparator"; "Classification/truth" -> "Classification/statistician"; "Classification/truth" -> "Classification/trainer"; "Classification/view_tagger" -> "Classification/tagged_views"; "Classification/views" -> "Classification/infer"; "Classification/views" -> "Classification/view_tagger" } Detection Tool# Detection Tool digraph "OnlyTool: Detection" { label="OnlyTool: Detection"; rankdir="TB"; node [shape=ellipse, style=filled, color=blue, fillcolor=lightblue]; // Operator style "Detection/comparator"; "Detection/infer"; "Detection/label_oper"; "Detection/view_tagger" node [shape=ellipse, style=filled, color=red, fillcolor=pink]; // Configurator style "Detection/base_color_conf"; "Detection/batch_size_conf"; "Detection/image_mean_conf"; "Detection/infer.conf"; "Detection/label_classes.conf"; "Detection/label_oper.conf"; "Detection/statistician"; "Detection/trainer"; "Detection/trainer.conf" node [shape=rect, style=filled, color=blue, fillcolor=lightblue]; // Property style "Detection/mask"; "Detection/match_result"; "Detection/tagged_polygons"; "Detection/tagged_views"; 
"Detection/truth" node [shape=point, style=filled, color=blue, fillcolor=lightblue]; // SingleVirtualInput property style "Detection/image"; "Detection/views" node [shape=invtriangle, style=filled, color=blue, fillcolor=lightblue]; // MultiVirtualInput property style node [shape=rect, style=dashed, color=blue, fillcolor=default]; // Output property style "Detection/pred" node [shape=rect, style=filled, color=red, fillcolor=pink]; // Parameter style "Detection/base_color"; "Detection/batch_size"; "Detection/image_mean"; "Detection/infer.args"; "Detection/label_oper.args"; "Detection/model"; "Detection/statistics"; "Detection/trainer.args"; "Detection/training_log" node [shape=point, style=filled, color=red, fillcolor=pink]; // SingleVirtualInput parameter style node [shape=invtriangle, style=filled, color=red, fillcolor=pink]; // MultiVirtualInput parameter style node [shape=rect, style=dashed, color=red, fillcolor=default]; // Output parameter style "Detection/classes" subgraph "cluster_Detection" { label="Detection"; "Detection/base_color" [label="id: Detection/base_color\ltype: visionflow::param::BaseColor\lupdate: 1970-01-01 08:00:00\l"]; "Detection/base_color_conf" [label="id: Detection/base_color_conf\ltype: visionflow::confs::BaseColorConf\lupdate: 1970-01-01 08:00:00\ldocs: Configurator to config input images' \lbase color.\l"]; "Detection/batch_size" [label="id: Detection/batch_size\ltype: visionflow::param::InferenceBatchSize\lupdate: 1970-01-01 08:00:00\ldocs: Inference BatchSize, Currently \lonly contains batch size. It may \lneed to be refactored in the future.\l"]; "Detection/batch_size_conf" [label="id: Detection/batch_size_conf\ltype: visionflow::confs::InferenceBatchSizeConf\lupdate: 1970-01-01 08:00:00\ldocs: Configurator to set inference \lbatch size.\l"]; "Detection/classes" [label="id: Detection/classes\ltype: visionflow::param::LabelClasses\lupdate: 1970-01-01 08:00:00\ldocs: A container to manage label classes.\l"]; "Detection/comparator" [label="id: Detection/comparator\ltype: visionflow::opers::RegionsMatcher\lupdate: 1970-01-01 08:00:00\ldocs: Operator to compare the predicted \lregions with the ground truth \lto get the category (in [TP, FP, \lTN, FN]) of each region.\l"]; "Detection/image_mean" [label="id: Detection/image_mean\ltype: visionflow::param::ImageMean\lupdate: 1970-01-01 08:00:00\ldocs: Image mean parameters\l"]; "Detection/image_mean_conf" [label="id: Detection/image_mean_conf\ltype: visionflow::confs::ImageMeanConf\lupdate: 1970-01-01 08:00:00\ldocs: ImageMeanConf Configurator class \lto compute the image mean values \lin the views.\l"]; "Detection/infer" [label="id: Detection/infer\ltype: visionflow::opers::DetectionInfer\lupdate: 1970-01-01 08:00:00\ldocs: Detection inference engine.\l"]; "Detection/infer.args" [label="id: Detection/infer.args\ltype: visionflow::param::DetectionInferParameters\lupdate: 1970-01-01 08:00:00\l"]; "Detection/infer.conf" [label="id: Detection/infer.conf\ltype: visionflow::confs::DetectionInferConf\lupdate: 1970-01-01 08:00:00\ldocs: Configurator to set Detection \linference parameters.\l"]; "Detection/label_classes.conf" [label="id: Detection/label_classes.conf\ltype: visionflow::confs::LabelClassesConf\lupdate: 1970-01-01 08:00:00\ldocs: Configurator class to generate \llabel classes parameter.\l"]; "Detection/label_oper" [label="id: Detection/label_oper\ltype: visionflow::opers::DetectionLabeler\lupdate: 1970-01-01 08:00:00\ldocs: Annotate operator for Classification \ltool.\l"]; "Detection/label_oper.args" 
[label="id: Detection/label_oper.args\ltype: visionflow::param::BinaryPacks\lupdate: 1970-01-01 08:00:00\ldocs: A container to manage list of \lbinary datas.\l"]; "Detection/label_oper.conf" [label="id: Detection/label_oper.conf\ltype: visionflow::confs::CustomConf\lupdate: 1970-01-01 08:00:00\ldocs: Configurator class to generate \lany user-defined parameters\l"]; "Detection/mask" [label="id: Detection/mask\ltype: visionflow::props::PolygonRegionList\lupdate: 1970-01-01 08:00:00\ldocs: List structure to manage polygon \lregions.\l"]; "Detection/match_result" [label="id: Detection/match_result\ltype: visionflow::props::RegionMatchResultList\lupdate: 1970-01-01 08:00:00\ldocs: A data structure to store list \lof RegionMatchResult.\l"]; "Detection/model" [label="id: Detection/model\ltype: visionflow::param::BinaryPacks\lupdate: 1970-01-01 08:00:00\ldocs: A container to manage list of \lbinary datas.\l"]; "Detection/pred" [label="id: Detection/pred\ltype: visionflow::props::PolygonRegionList\lupdate: 1970-01-01 08:00:00\ldocs: List structure to manage polygon \lregions.\l"]; "Detection/statistician" [label="id: Detection/statistician\ltype: visionflow::confs::RegionMatchResultCounter\lupdate: 1970-01-01 08:00:00\ldocs: Configurator to count region match \lresults.\l"]; "Detection/statistics" [label="id: Detection/statistics\ltype: visionflow::param::ModelEvaluationMetrics\lupdate: 1970-01-01 08:00:00\ldocs: A container to manage table.\l"]; "Detection/tagged_polygons" [label="id: Detection/tagged_polygons\ltype: visionflow::props::TaggedPolygonList\lupdate: 1970-01-01 08:00:00\ldocs: Property TaggedPolygonList implementation.\l"]; "Detection/tagged_views" [label="id: Detection/tagged_views\ltype: visionflow::props::ViewList\lupdate: 1970-01-01 08:00:00\ldocs: Property ViewList implementation.\l"]; "Detection/trainer" [label="id: Detection/trainer\ltype: visionflow::confs::DetectionTrainer\lupdate: 1970-01-01 08:00:00\ldocs: Detection model trainer.\l"]; "Detection/trainer.args" [label="id: Detection/trainer.args\ltype: visionflow::param::DetectionTrainingParameters\lupdate: 1970-01-01 08:00:00\l"]; "Detection/trainer.conf" [label="id: Detection/trainer.conf\ltype: visionflow::confs::DetectionTrainerConf\lupdate: 1970-01-01 08:00:00\ldocs: Configurator to set Detection \ltrainer options.\l"]; "Detection/training_log" [label="id: Detection/training_log\ltype: visionflow::param::TrainingLog\lupdate: 1970-01-01 08:00:00\l"]; "Detection/truth" [label="id: Detection/truth\ltype: visionflow::props::PolygonRegionList\lupdate: 1970-01-01 08:00:00\ldocs: List structure to manage polygon \lregions.\l"]; "Detection/view_tagger" [label="id: Detection/view_tagger\ltype: visionflow::opers::ViewTagger\lupdate: 1970-01-01 08:00:00\ldocs: Operator used to tag the views \lwith some already tagged polygons \lautomatically. 
The spilt_tag and \ltags of the most matched tagged_polygon \lselected using CIou will be set \lto the view, otherwise, the view \lwill remain its original spilt_tag \land tags info.\l"] } "Detection/base_color" -> "Detection/image_mean_conf"; "Detection/base_color" -> "Detection/trainer"; "Detection/base_color_conf" -> "Detection/base_color"; "Detection/batch_size" -> "Detection/infer"; "Detection/batch_size_conf" -> "Detection/batch_size"; "Detection/classes" -> "Detection/label_oper"; "Detection/classes" -> "Detection/trainer"; "Detection/comparator" -> "Detection/match_result"; "Detection/image" -> "Detection/image_mean_conf"; "Detection/image" -> "Detection/infer"; "Detection/image" -> "Detection/label_oper"; "Detection/image" -> "Detection/trainer"; "Detection/image_mean" -> "Detection/trainer"; "Detection/image_mean_conf" -> "Detection/image_mean"; "Detection/infer" -> "Detection/pred"; "Detection/infer.args" -> "Detection/infer"; "Detection/infer.conf" -> "Detection/infer.args"; "Detection/label_classes.conf" -> "Detection/classes"; "Detection/label_oper" -> "Detection/mask"; "Detection/label_oper" -> "Detection/tagged_polygons"; "Detection/label_oper" -> "Detection/truth"; "Detection/label_oper.args" -> "Detection/label_oper"; "Detection/label_oper.conf" -> "Detection/label_oper.args"; "Detection/mask" -> "Detection/trainer"; "Detection/model" -> "Detection/infer"; "Detection/pred" -> "Detection/comparator"; "Detection/pred" -> "Detection/statistician"; "Detection/statistician" -> "Detection/statistics"; "Detection/tagged_polygons" -> "Detection/view_tagger"; "Detection/tagged_views" -> "Detection/comparator"; "Detection/tagged_views" -> "Detection/image_mean_conf"; "Detection/tagged_views" -> "Detection/statistician"; "Detection/tagged_views" -> "Detection/trainer"; "Detection/trainer" -> "Detection/model"; "Detection/trainer" -> "Detection/training_log"; "Detection/trainer.args" -> "Detection/trainer"; "Detection/trainer.conf" -> "Detection/trainer.args"; "Detection/truth" -> "Detection/comparator"; "Detection/truth" -> "Detection/statistician"; "Detection/truth" -> "Detection/trainer"; "Detection/view_tagger" -> "Detection/tagged_views"; "Detection/views" -> "Detection/infer"; "Detection/views" -> "Detection/view_tagger" } ViewTransformer Tool# View transformer tool. 
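As a reading aid, the sketch below distills the ViewTransformer's essential data flow from the edges of the full graph that follows; it keeps only the property path and omits the filter.conf/filter.args and transformer.conf/transformer.args parameter chains. Every edge in it appears verbatim in the listing below; only the graph title is ours.
digraph "ViewTransformer (simplified data flow)" {
    rankdir="TB";
    // regions and views coming from the previous tool are filtered first
    "ViewTransformer/input_views" -> "ViewTransformer/filter";
    "ViewTransformer/regions" -> "ViewTransformer/filter";
    "ViewTransformer/filter" -> "ViewTransformer/filtered_regions";
    // the transformer turns the surviving regions into new view windows for the next tool
    "ViewTransformer/filtered_regions" -> "ViewTransformer/transformer";
    "ViewTransformer/image_info" -> "ViewTransformer/transformer";
    "ViewTransformer/transformer" -> "ViewTransformer/transformed_views"
}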
digraph "OnlyTool: ViewTransformer" { label="OnlyTool: ViewTransformer"; rankdir="TB"; node [shape=ellipse, style=filled, color=blue, fillcolor=lightblue]; // Operator style "ViewTransformer/filter"; "ViewTransformer/transformer" node [shape=ellipse, style=filled, color=red, fillcolor=pink]; // Configurator style "ViewTransformer/filter.conf"; "ViewTransformer/transformer.conf" node [shape=rect, style=filled, color=blue, fillcolor=lightblue]; // Property style "ViewTransformer/filtered_regions" node [shape=point, style=filled, color=blue, fillcolor=lightblue]; // SingleVirtualInput property style "ViewTransformer/image_info"; "ViewTransformer/input_views"; "ViewTransformer/regions" node [shape=invtriangle, style=filled, color=blue, fillcolor=lightblue]; // MultiVirtualInput property style node [shape=rect, style=dashed, color=blue, fillcolor=default]; // Output property style "ViewTransformer/transformed_views" node [shape=rect, style=filled, color=red, fillcolor=pink]; // Parameter style "ViewTransformer/filter.args"; "ViewTransformer/transformer.args" node [shape=point, style=filled, color=red, fillcolor=pink]; // SingleVirtualInput parameter style node [shape=invtriangle, style=filled, color=red, fillcolor=pink]; // MultiVirtualInput parameter style node [shape=rect, style=dashed, color=red, fillcolor=default]; // Output parameter style subgraph "cluster_ViewTransformer" { label="ViewTransformer"; "ViewTransformer/filter" [label="id: ViewTransformer/filter\ltype: visionflow::opers::ViewFilter\lupdate: 1970-01-01 08:00:00\ldocs: Region filter in each ViewTransformer \ltool.\l"]; "ViewTransformer/filter.args" [label="id: ViewTransformer/filter.args\ltype: visionflow::param::ViewFilterParameters\lupdate: 1970-01-01 08:00:00\l"]; "ViewTransformer/filter.conf" [label="id: ViewTransformer/filter.conf\ltype: visionflow::confs::ViewFilterConf\lupdate: 1970-01-01 08:00:00\ldocs: Configurator to config the filter \lparameter before view transformer.\l"]; "ViewTransformer/filtered_regions" [label="id: ViewTransformer/filtered_regions\ltype: visionflow::props::PolygonRegionList\lupdate: 1970-01-01 08:00:00\ldocs: List structure to manage polygon \lregions.\l"]; "ViewTransformer/transformed_views" [label="id: ViewTransformer/transformed_views\ltype: visionflow::props::ViewList\lupdate: 1970-01-01 08:00:00\ldocs: Property ViewList implementation.\l"]; "ViewTransformer/transformer" [label="id: ViewTransformer/transformer\ltype: visionflow::opers::ViewTransformer\lupdate: 1970-01-01 08:00:00\ldocs: Operator used to transform the \lresult of the previous tool's \ldetection output with translation, \lscaling, rotation, masking and \lother transformation parameters \lto obtain new view windows that \lcan be used as input to the next \ltool.\l"]; "ViewTransformer/transformer.args" [label="id: ViewTransformer/transformer.args\ltype: visionflow::param::ViewTransformParameterList\lupdate: 1970-01-01 08:00:00\ldocs: A container to manage list of \lview transform parameters.\l"]; "ViewTransformer/transformer.conf" [label="id: ViewTransformer/transformer.conf\ltype: visionflow::confs::ViewTransformerConf\lupdate: 1970-01-01 08:00:00\ldocs: Configurator to config the view \ltransformer.\l"] } "ViewTransformer/filter" -> "ViewTransformer/filtered_regions"; "ViewTransformer/filter.args" -> "ViewTransformer/filter"; "ViewTransformer/filter.conf" -> "ViewTransformer/filter.args"; "ViewTransformer/filtered_regions" -> "ViewTransformer/transformer"; "ViewTransformer/image_info" -> "ViewTransformer/transformer"; 
"ViewTransformer/input_views" -> "ViewTransformer/filter"; "ViewTransformer/regions" -> "ViewTransformer/filter"; "ViewTransformer/transformer" -> "ViewTransformer/transformed_views"; "ViewTransformer/transformer.args" -> "ViewTransformer/transformer"; "ViewTransformer/transformer.conf" -> "ViewTransformer/transformer.args" }