Getting Started with VisionFlow#

We explained how to add VisionFlow as a dependency to your project in Installing. Now we will quickly demonstrate the main functional interfaces of VisionFlow through a simple example.

This example covers the complete workflow from creating a project, adding tools, adding data and annotations, and training the model, to deploying the model. If you are not interested in how to create a project and train a model through the VisionFlow interface, and only want to know how to deploy a pre-trained model exported as a model file, you can start reading from the Loading Exported Model section.

Initializing#

The VisionFlow library has some global settings that need to be specified at runtime and must be set before calling any other interfaces. These settings include:

  1. Logging output settings: By setting the logging output options, you can customize how VisionFlow’s logs are output, such as to a file path, a callback function, standard output/terminal, or the MSVC debugger.

  2. Language preference: VisionFlow supports multiple languages. By default, the language preference is automatically set to match your operating system’s language during initialization. If you wish to use a different language, you can specify it through initialization parameters.

Here is an example of initializing VisionFlow:

#include <iostream>
#include "visionflow/visionflow.hpp"

namespace vflow = visionflow;

void my_logger_output(int log_level, const char *content, size_t len) {
    if (log_level > 2) {
        std::cout << std::string(content, len) << std::endl;
    }
}

int main(int /*argc*/, char ** /*argv*/) try {

    vflow::InitOptions opts;

    // Set the log output file.
    opts.logger.file_sink = "visionflow.log";
    // Set whether to output the logs to the standard output terminal.
    opts.logger.stdout_sink = true;
    // You can customize the handling of VisionFlow's log output by setting a log output callback function.
    // opts.logger.func_sink = my_logger_output;

    // Set the language to Chinese.
    opts.language = "zh_CN";

    vflow::initialize(opts);

    return 0;
} catch (const std::exception &ex) {
    std::cerr << "Unexpected Escaped Exception: " << ex.what();
    return -1;
}
using System;
using System.Runtime.InteropServices;

public class Program
{
    public static void Main()
    {
        try
        {
            visionflow.InitOptions opts = new visionflow.InitOptions();

            // Set the log output file.
            opts.logger.file_sink = "visionflow.log";
            // Set whether to output the logs to the standard output terminal.
            opts.logger.stdout_sink = true;

            // Set the language to Chinese.
            opts.language = "zh_CN";

            visionflow_global.initialize(opts);
        }
        catch (Exception ex)
        {
            Console.WriteLine(ex);
        }
    }
}

Creating Project#

A Project is the fundamental unit in VisionFlow for managing data and processing workflows. We must first create a Project before proceeding with any subsequent steps.

vflow::ProjectDescriptor desc;

desc.workspace_token = "D:/the/path/to/workspace";
desc.project_name = "my_first_project";

auto out_desc = vflow::Project::Create(desc);

std::cout << "My first VisionFlow project created at: "
          << out_desc.created_time << std::endl;
std::cout << "The VisionFlow version which the project created by: "
          << out_desc.sdk_version << std::endl;

// And then you can open the project with the desc or the out_desc.
auto project = vflow::Project::Open(desc);

std::cout << "My first VisionFlow project name is: "
          << project->descriptor().project_name << std::endl;

Adding Tools#

VisionFlow provides a variety of tools that you can freely combine and connect to accomplish different processing workflows. You can find all the tools provided by VisionFlow in Tools Usage and Graph.

In this example, we will add an Input tool and a Segmentation tool, and connect the Segmentation tool to the output of the Input tool:

std::string input_id = project->add_tool("Input");
std::string segmentation_id = project->add_tool("Segmentation");

bool connected = project->auto_connect(input_id, segmentation_id);
if (!connected) {
    std::cerr << "auto_connect failed, Please use Project::connect instead."
              << std::endl;
}
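
If auto_connect() cannot determine how to wire the two tools, the Project::connect interface mentioned in the error message lets you connect them explicitly. The sketch below is hypothetical: the signature and the port names are assumptions for illustration only, so consult the tools' workflow diagrams for the actual node identifiers:

// Hypothetical sketch, not a confirmed signature: wire a specific output
// node of the Input tool to a specific input node of the Segmentation tool.
// project->connect({input_id, "image"}, {segmentation_id, "image"});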

Importing Images#

In VisionFlow, we use SampleSet to manage the data within a project. All management of Sample data within a Project is done through SampleSet or PropertySet. A project can have multiple SampleSets; typically it includes a main dataset used for training and validating models, but you can also create additional SampleSets as needed. Adding image data to a project means adding the image data to a dataset within the project. Here is an example of getting the SampleSet or PropertySet from the project:

auto sample_set = project->main_sample_set();
auto input_image_set = sample_set.property_set({input_id, "image"});

// Or you can get the read only sample set and read only property set:
// auto ro_sample_set = project->readonly_main_sample_set();
// auto ro_input_image_set = sample_set.readonly_property_set({input_id, "image"});

There are several different ways to import images into the project. If you want to import a series of image files, you can use the visionflow::helper::InputHelper utility to accomplish this. Here’s an example:

const auto old_sample_num = sample_set.size();

vflow::helper::InputHelper input_helper(project.get());

for (int i = 0; i < 100; ++i) {
    input_helper.add_image("D:/path/to/image/" + std::to_string(i) + ".png");
}
// Import and commit the images.
auto receipts = input_helper.commit();

// Some images may fail to import; you can get the reasons from the receipts:
size_t success_count = 0;
for (const auto &receipt : receipts) {
    if (receipt.is_success) {
        success_count++;
    } else {
        std::cout << "Failed to import image " << receipt.image_files[0]
                  << " as: " << receipt.error_message << std::endl;
    }
}

// And then you can check the samples in the sample set:
assert(sample_set.size() - old_sample_num == success_count);
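
If you want to retry the failed imports later, you can collect the failing file paths from the same receipts. This is a minimal sketch that relies only on the receipt fields used above (it assumes receipt.image_files is an iterable list of file paths):

std::vector<std::string> failed_files;
for (const auto &receipt : receipts) {
    if (!receipt.is_success) {
        // Keep every file belonging to the failed sample for a later retry.
        for (const auto &file : receipt.image_files) {
            failed_files.push_back(file);
        }
    }
}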

In some scenarios, your image may not exist in the file system but is already loaded into memory. In such cases, you can use the function visionflow::helper::add_image_to_sample() to quickly add the image to a sample, which you can then add to a sample set:

// We have an image loaded into memory:
auto image = vflow::Image::FromFile("D:/path/to/image/0.png");

// Create a sample template from the sample set:
auto sample = sample_set.create_empty_sample();
// Add the image to the sample:
vflow::helper::add_image_to_sample(sample, image, input_id, true);
// Add the sample to sample set:
sample_set.add(sample);

Note

The documentation of visionflow::helper::add_image_to_sample() provides a detailed explanation of what the function does and how it is implemented. If you want to gain a deeper understanding of VisionFlow’s data management mechanism, we strongly recommend reading it carefully.

Adding Label#

Before introducing how to add annotations to a dataset, let’s briefly explain how to read and write data in a PropertySet. A PropertySet is a data container that can be iterated over. Each PropertySet corresponds to a data node in the processing workflow and contains that node’s data for all samples in your dataset. Taking the image property set we added above as an example, we can access the data within it as follows (refer to the documentation of visionflow::data::PropertySet for more information about the related interfaces):

// Get the property set.
auto image_set = sample_set.property_set({input_id, "image"});
// Iterate over the data in the property set.
for (const auto &[sample_id, image_prop] : image_set) {
    if (image_prop) { // If a sample has no data at this data node, a null pointer is returned.
        image_prop->as<vflow::props::Image>().image().show(10);
    } else {
        auto modify_image = vflow::Image::FromFile("D:/path/to/image/0.png");
        vflow::props::Image new_prop(modify_image);
        // Update the data in the property set.
        image_set.update(sample_id, new_prop);
    }
}

In this way, we can add all the views generated during image import to the training set (this is necessary for subsequent training, as only views added to the training set will participate in the training process):

auto views_set = sample_set.property_set({input_id, "views"});

for (auto [sample_id, views_prop] : views_set) {
    if (views_prop) {
        for (auto &[view_id, view] : views_prop->as<vflow::props::ViewList>()) {
            view.set_split_tag(vflow::kTrain);
        }
        views_set.update(sample_id, *views_prop);
    }
}
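
As a quick sanity check, you can iterate over the same property set again and count how many views are now tagged for training. This sketch reuses only the interfaces shown above, except for view.split_tag(), which is assumed here as the getter counterpart of set_split_tag():

size_t train_view_count = 0;
for (const auto &[sample_id, views_prop] : views_set) {
    if (views_prop) {
        for (const auto &[view_id, view] : views_prop->as<vflow::props::ViewList>()) {
            // split_tag() is an assumed getter mirroring set_split_tag().
            if (view.split_tag() == vflow::kTrain) {
                ++train_view_count;
            }
        }
    }
}
std::cout << "Views tagged for training: " << train_view_count << std::endl;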

Once you understand the functionality and usage of property sets, adding annotations is essentially just updating your annotation data into the property set corresponding to the annotation data node:

auto label_set = sample_set.property_set({segmentation_id, "truth"});

for (const auto &[sample_id, old_label] : label_set) {
    // You can replace the flow below with your own function, for example:
    // auto new_label = show_and_modify_label(image_prop, old_label);
    if (old_label) {
        auto new_label = old_label->as<vflow::props::PolygonRegionList>();
        for (auto &[region_id, region] : new_label) {
            region.set_name("MyClass");
        }
        label_set.update(sample_id, new_label);
    }
}

Setting Training Parameters#

Different tools require different sets of parameters at different stages. For the segmentation tool in our example, before training, we need to set the following parameter groups:

// Set the list of annotation classes.
vflow::param::LabelClasses classes;
classes.add("MyClass");
project->set_param({segmentation_id, "classes"}, classes);

// Set the image color used for training.
vflow::param::BaseColor color;
color.set_color(vflow::param::kGray);
project->set_param({segmentation_id, "base_color"}, color);

// Set the training parameters.
vflow::param::SegmentationTrainingParameters train_param;
train_param.set_epoch(10)
    .get_augmentations()
    .get_geometry_augmentation()
    .set_flip_horizontal(true)
    .set_flip_vertical(true);
project->set_param({segmentation_id, "trainer.args"}, train_param);

Training and Configuration Parameters#

Once all the data is prepared, training a model is very simple:

// Create the strategy for the trainer. You can read the detailed interface
// documentation of the related types to learn what these parameters do.
vflow::runtime::StrategyOptions strategy;
strategy.allow_auto_update = true;
strategy.allow_auto_update_rely_on_prop = true;
strategy.ignore_update_time_requirement = true;
// You can receive training progress information by setting a custom callback function.
strategy.call_back = nullptr;

// Create the training runtime.
auto trainer = project->create_config_runtime({segmentation_id, "trainer"}, strategy);

// Create the data server for the training runtime.
auto data_server = vflow::adapt(project.get(), trainer);

// Initialize and execute the training. After a successful training run, the model is automatically saved into the project.
trainer.initialize(data_server);
trainer.execute(data_server);

Exporting the Model#

After training the model and configuring the parameters, you may want to deploy your model to other hosts. One way to do this is by directly copying the project to another host and using the copied project for deploying your detection workflow. However, since the project contains all the data used for training and validating the model, its size can be quite large, making direct copying inconvenient. To facilitate deployment, you can export all the models and configured parameters into a standalone file as shown below:

project->export_model("D:/path/to/save/my_first_project.vfmodel");

If everything is successful, you should now be able to find the my_first_project.vfmodel file in the D:/path/to/save/ directory. You can copy this file to the machine where you need to deploy the detection workflow and proceed with the following deployment steps.
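
As a quick sanity check that the export succeeded, you can test for the file with the C++ standard library before copying it; this is plain standard C++ (std::filesystem, C++17), not a VisionFlow API:

// Requires <filesystem> (C++17).
if (!std::filesystem::exists("D:/path/to/save/my_first_project.vfmodel")) {
    std::cerr << "Model file not found; check the export path." << std::endl;
}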

Loading Exported Model#

Before loading the exported model, make sure that the VisionFlow dependency libraries have been initialized. For the detailed process of initializing the dependency libraries, please refer to the Initializing section.

After that, you can open your exported model as shown in the code below:

vflow::Model model("D:/path/to/my_first_project.vfmodel");
visionflow.Model model = new visionflow.Model("D:/path/to/my_first_project.vfmodel");

Setting Inference Parameters#

We provide some convenient interfaces that allow you to read and modify certain parameters within the model during the deployment phase. Although in practical scenarios the parameters that can be modified during deployment are limited, we still allow any parameter within the model to be modified through this interface. After opening the model, you can modify the parameters by reading them from the model, making the necessary changes, and then saving them back, as shown below:

// Read the parameters of the corresponding tool by the tool's name in your project.
std::string segmentation_id = "Segmentation";
auto filter_param = model.get_param({segmentation_id, "filter.args"});
if (!filter_param) {
    std::cerr << "Filter parameter for Segmentation not exist." << std::endl;
    exit(-1);
}

// Here we modify the filter parameters for defects whose class name is "MyClass".
filter_param->as<vflow::param::PolygonsFilterParameters>()
    .get_class_thresholds("MyClass")
    .set_enable(true)
    .set_area_range({100, 50000});

// Then you can save the modified parameters back into the model.
model.set_param({segmentation_id, "filter.args"}, *filter_param);
To be completed
// Read the parameters of the corresponding tool by the tool's name in your project.
string segmentation_id = "Segmentation";
// Here we read the filter parameters applied after the inference results.
var tool_node_id = new visionflow.ToolNodeId(segmentation_id, "filter.args");
var param = model.get_param(tool_node_id);

// The parameters can be serialized to JSON and printed.
// Note that this is `to_string()`, not `ToString()`.
Console.WriteLine(param.to_json().to_string());

// Note that the type here must match the node's type,
// otherwise deserialization will fail later.
var filter_args = new visionflow.param.PolygonsFilterParameters();

// The content can also be deserialized from JSON.
filter_args.from_json(param.to_json());

// Here we modify the filter parameters for defects whose class name is "1".
var area_range = new std.VectorInt {100, 5000};
filter_args.get_class_thresholds("1").set_enable(true).set_area_range(area_range);

// Then you can save the modified parameters back into the model.
model.set_param(tool_node_id, filter_args);

// Confirm that the save succeeded.
param = model.get_param(tool_node_id);
Console.WriteLine(param.to_json().to_string());

Warning

The parameters set using the model.set_param(...) interface are only effective within the currently opened model. After you close this model and reopen it, these modifications will be lost. If you want to save these modifications permanently, you need to create a backup of the opened model as shown in the code below:

model.resave_to("D:/other/path/model_2.vfmodel");

Inference#

Before starting the inference, we need to create a runtime that can execute based on the model data. There are various strategies for creating the runtime, but here, for simplicity, we will directly use the default parameters to run all the tools in the model:

vflow::runtime::AllTools strategy;
// If an upstream model or parameter in your model was updated later than a
// downstream one, and you are sure this is not an error, use this option to ignore it.
strategy.options.ignore_update_time_requirement = true;
// If your camera acquisition flow is not registered in VisionFlow, the corresponding
// node in your workflow graph is a virtual node; enable this option.
strategy.options.allow_unrealized_oper = true;
auto runtime = model.create_runtime(strategy);
var strategy = new visionflow.runtime.AllTools();
// If an upstream model or parameter in your model was updated later than a
// downstream one, and you are sure this is not an error, use this option to ignore it.
strategy.options.ignore_update_time_requirement = true;
// If your camera acquisition flow is not registered in VisionFlow, the corresponding
// node in your workflow graph is a virtual node; enable this option.
strategy.options.allow_unrealized_oper = true;
var runtime = model.create_runtime(strategy);

After creating the runtime, we can use it to inspect the images we want to process. The runtime requires a Sample to store the input images, intermediate results, and the final output of the entire detection process. Therefore, we need to create a Sample and add our image to it as shown below; then we can execute our vision processing flow on this sample:

std::string input_id = "Input"; // The id of the Input tool.

auto sample = runtime.create_sample();
auto image = vflow::Image::FromFile("D:/path/to/image.png");
vflow::helper::add_image_to_sample(sample, image, input_id);

// Then we can execute the processing flow on the sample.
runtime.execute(sample);
string input_id = "Input"; // The id of the Input tool.

var sample1 = runtime.create_sample();
var image1 = visionflow.img.Image.FromFile("D:/path/to/save/1.bmp");
visionflow_helpers_global.add_image_to_sample(sample1, image1, input_id);

// Then we can execute the processing flow on the sample.
runtime.execute(sample1);

After execution, we can retrieve each tool's detection results from the sample and process them according to our requirements (the ID of each intermediate or final result can be found in the detailed workflow diagram of each tool):

auto result = sample.get({segmentation_id, "pred"});
std::cout << "Result: " << result->to_json().to_string() << std::endl;

const auto &segment_pred = result->as<vflow::props::PolygonRegionList>();
for (const auto &[id, region] : segment_pred) {
    std::cout << "Found a defect: " << id
              << ", name: " << region.name()
              << ", area: " << region.area() << std::endl;
}

// Draw the defects on the image and show the image:
vflow::img::draw(image, segment_pred.to_multi_polygons(), {10, 10, 240}, 2);
image.show();
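
If you need to post-process the predictions yourself, for example to report only large defects, you can filter with the same accessors used above (only region.name() and region.area() from the snippet above are relied on; the 100-pixel threshold is an arbitrary example value):

for (const auto &[id, region] : segment_pred) {
    // Report only regions whose area exceeds the example threshold.
    if (region.area() >= 100.0) {
        std::cout << "Large defect " << id << " (" << region.name() << ")" << std::endl;
    }
}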
var pred_node_id = new visionflow.ToolNodeId(segmentation_id, "pred");
var result = sample1.get(pred_node_id);
var segment_pred = new visionflow.props.PolygonRegionList();
segment_pred.from_json(result.to_json());

// Print the JSON content of the inference result.
// Note that this is `to_string()`, not `ToString()`.
Console.WriteLine(segment_pred.to_json().to_string());

// Iterate over the inference results.
var ids = segment_pred.keys();
foreach (var id in ids) {
    // Check the C++ interface to determine the corresponding type.
    var region = segment_pred.at(id) as visionflow.PolygonRegion;
    Console.WriteLine("id = {0}, region_name = {1}, region_area = {2}", id, region.name(), region.area());
}

// Set the drawing color.
var color = new std.VectorInt();
color.Add(10);
color.Add(10);
color.Add(240);

// Draw the inference results on the original image and display it for a while.
visionflow_img_global.draw(image1, segment_pred.to_multi_polygons(), color, 2);
image1.show(2000);

Full Example#

You can get the full online inference example below:

Online Inference Usage#
#include <exception>
#include <iostream>
#include <string>

#include "visionflow/visionflow.hpp"

namespace vflow = visionflow;

void my_logger_output(int log_level, const char *content, size_t len) {
  if (log_level > 2) {
    std::cout << std::string(content, len) << std::endl;
  }
}

int main(int /*argc*/, char ** /*argv*/) try {

  vflow::InitOptions opts;

  // Set the log output file.
  opts.logger.file_sink = "visionflow.log";
  // Set whether to output the logs to the standard output terminal.
  opts.logger.stdout_sink = true;
  // You can customize the handling of VisionFlow's log output by setting a log output callback function.
  // opts.logger.func_sink = my_logger_output;

  // Set the language to Chinese.
  opts.language = "zh_CN";

  vflow::initialize(opts);

  vflow::Model model("D:/path/to/save/my_first_project.vfmodel");

  // Read the parameters of the corresponding tool by the tool's name in your project.
  std::string segmentation_id = "Segmentation";
  auto filter_param = model.get_param({segmentation_id, "filter.args"});
  if (!filter_param) {
    std::cerr << "Filter parameter for Segmentation does not exist." << std::endl;
    exit(-1);
  }

  // Here we modify the filter parameters for defects whose class name is "MyClass".
  filter_param->as<vflow::param::PolygonsFilterParameters>()
      .get_class_thresholds("MyClass")
      .set_enable(true)
      .set_area_range({0, 500000});

  // Then you can save the modified parameters back into the model.
  model.set_param({segmentation_id, "filter.args"}, *filter_param);

  model.resave_to("D:/other/path/model_2.vfmodel");

  // Before starting inference, we need to create a runtime that can execute based on
  // the model data. There are various strategies for creating the runtime; here, for
  // simplicity, we directly use the default parameters to run all the tools in the model:
  vflow::runtime::AllTools strategy;
  // If an upstream model or parameter was updated later than a downstream one,
  // and you are sure this is not an error, use this option to ignore it.
  strategy.options.ignore_update_time_requirement = true;
  // If your camera acquisition flow is not registered in VisionFlow, the corresponding
  // node in your workflow graph is a virtual node; enable this option.
  strategy.options.allow_unrealized_oper = true;
  auto runtime = model.create_runtime(strategy);

  // After creating the runtime, we can use it to inspect the images we need to process.
  // The runtime requires a sample to store the input images, intermediate results, and
  // the final output of the whole detection flow, so we first create a sample and add
  // our image to it:
  std::string input_id = "Input";
  while (true) {
    auto sample = runtime.create_sample();
    auto image = vflow::Image::FromFile("D:/path/to/image.png");
    vflow::helper::add_image_to_sample(sample, image, input_id);
    runtime.execute(sample);

    auto result = sample.get({segmentation_id, "pred"});

    std::cout << "Result: " << result->to_json().to_string() << std::endl;

    const auto &segment_pred = result->as<vflow::props::PolygonRegionList>();
    for (const auto &[id, region] : segment_pred) {
      std::cout << "Found a defect: " << id << ", name: " << region.name()
                << ", area: " << region.area() << std::endl;
    }

    // Draw the defects on the image and show the image:
    vflow::img::draw(image, segment_pred.to_multi_polygons(), {10, 10, 240}, 2);
    image.show();
  }

  return 0;

} catch (const std::exception &ex) {
  std::cerr << "Unexpected Escaped Exception: " << ex.what();
  return -1;
}
Online Inference Usage#
using System;
using System.Runtime.InteropServices;

public class Program
{
    public static void Main()
    {
        try
        {
            visionflow.InitOptions opts = new visionflow.InitOptions();

            // Set the log output file.
            opts.logger.file_sink = "visionflow.log";
            // Set whether to output the logs to the terminal.
            opts.logger.stdout_sink = true;

            // Set the language to Chinese.
            opts.language = "zh_CN";

            visionflow_global.initialize(opts);

            // Load the model.
            visionflow.Model model = new visionflow.Model("D:/path/to/save/segmentation.vfmodel");

            // Read the parameters of the corresponding tool by the tool's name in your project.
            string segmentation_id = "Segmentation";
            // Here we read the filter parameters applied after the inference results.
            var tool_node_id = new visionflow.ToolNodeId(segmentation_id, "filter.args");
            var param = model.get_param(tool_node_id);

            // The parameters can be serialized to JSON and printed.
            // Note that this is `to_string()`, not `ToString()`.
            Console.WriteLine(param.to_json().to_string());

            // Note that the type here must match the node's type,
            // otherwise deserialization will fail later.
            var filter_args = new visionflow.param.PolygonsFilterParameters();

            // The content can also be deserialized from JSON.
            filter_args.from_json(param.to_json());

            // Here we modify the filter parameters for defects whose class name is "1".
            var area_range = new std.VectorInt { 100, 10000 };
            filter_args.get_class_thresholds("1").set_enable(true).set_area_range(area_range);

            // Then you can save the modified parameters back into the model.
            model.set_param(tool_node_id, filter_args);

            // Confirm that the save succeeded.
            param = model.get_param(tool_node_id);
            Console.WriteLine(param.to_json().to_string());

            // Before starting inference, we need to create a runtime that can execute based on
            // the model data. There are various strategies for creating the runtime; here, for
            // simplicity, we directly use the default parameters to run all the tools in the model:
            var strategy = new visionflow.runtime.AllTools();
            // If an upstream model or parameter was updated later than a downstream one,
            // and you are sure this is not an error, use this option to ignore it.
            strategy.options.ignore_update_time_requirement = true;
            // If your camera acquisition flow is not registered in VisionFlow, the corresponding
            // node in your workflow graph is a virtual node; enable this option.
            strategy.options.allow_unrealized_oper = true;
            var runtime = model.create_runtime(strategy);

            // After creating the runtime, we can use it to inspect the images we need to process.
            // The runtime requires a sample to store the input images, intermediate results, and
            // the final output of the whole detection flow, so we first create samples and add
            // our images to them:
            string input_id = "Input";
            while (true)
            {
                // For multiple images, we naturally create multiple samples:
                var sample1 = runtime.create_sample();
                var sample2 = runtime.create_sample();
                var image1 = visionflow.img.Image.FromFile("D:/path/to/save/1.bmp");
                var image2 = visionflow.img.Image.FromFile("D:/path/to/save/2.bmp");

                visionflow_helpers_global.add_image_to_sample(sample1, image1, input_id);
                visionflow_helpers_global.add_image_to_sample(sample2, image2, input_id);
                runtime.execute(sample1);

                // Get the prediction results after inference.
                var pred_node_id = new visionflow.ToolNodeId(segmentation_id, "pred");
                var result = sample1.get(pred_node_id);
                var segment_pred = new visionflow.props.PolygonRegionList();
                segment_pred.from_json(result.to_json());

                // Print the JSON content of the inference result.
                // Note that this is `to_string()`, not `ToString()`.
                Console.WriteLine(segment_pred.to_json().to_string());

                // Iterate over the inference results.
                var ids = segment_pred.keys();
                foreach (var id in ids)
                {
                    // Check the C++ interface to determine the corresponding type.
                    var region = segment_pred.at(id) as visionflow.PolygonRegion;
                    Console.WriteLine("id = {0}, region_name = {1}, region_area = {2}", id, region.name(), region.area());
                }

                // Set the drawing color.
                var color = new std.VectorInt();
                color.Add(10);
                color.Add(10);
                color.Add(240);

                // Draw the inference results on the original image and display it for a while.
                visionflow_img_global.draw(image1, segment_pred.to_multi_polygons(), color, 2);
                image1.show(2000);

                // Run inference on the other image.
                runtime.execute(sample2);
                result = sample2.get(pred_node_id);
                segment_pred.from_json(result.to_json());
                visionflow_img_global.draw(image2, segment_pred.to_multi_polygons(), color, 2);
                image2.show(2000);
            }
        }
        catch (Exception ex)
        {
            Console.WriteLine(ex);
        }
    }
}