Getting Started with VisionFlow#
Installing VisionFlow explains how to add VisionFlow as a dependency to your project. Here, we will quickly demonstrate the main functional interfaces of VisionFlow through a simple example.
This example covers the complete workflow: creating a project, adding tools, adding data and annotations, training the model, and deploying it. If you are not interested in creating a project and training a model through the VisionFlow interface, and only want to know how to deploy a pre-trained model exported as a model file, you can start reading from the Loading Exported Model section.
Initializing#
The VisionFlow library has some global settings that must be configured at runtime, before any other interface is called. These settings include:
Logging output settings: By setting the logging output options, you can customize where VisionFlow’s logs go, such as a file path, a callback function, standard output/terminal, or the MSVC debugger.
Language preference: VisionFlow supports multiple languages. By default, the language preference is automatically set to match your operating system’s language during initialization. If you wish to use a different language, you can specify it through initialization parameters.
Here is an example of initializing VisionFlow:
#include <exception>
#include <iostream>
#include <string>

#include "visionflow/visionflow.hpp"

namespace vflow = visionflow;

void my_logger_output(int log_level, const char *content, size_t len) {
  if (log_level > 2) {
    std::cout << std::string(content, len) << std::endl;
  }
}

int main(int /*argc*/, char ** /*argv*/) try {
  vflow::InitOptions opts;
  // Set the log output file
  opts.logger.file_sink = "visionflow.log";
  // Set whether to output the logs to the standard output terminal
  opts.logger.stdout_sink = true;
  // You can customize the handling of VisionFlow's log output by setting a
  // log output callback function:
  // opts.logger.func_sink = my_logger_output;
  // Set the language to Chinese
  opts.language = "zh_CN";

  vflow::initialize(opts);
  return 0;
} catch (const std::exception &ex) {
  std::cerr << "Unexpected Escaped Exception: " << ex.what();
  return -1;
}
To be completed
Creating Project#
A Project is the fundamental unit in VisionFlow for managing data and processing workflows, so we must create a Project before proceeding with any subsequent steps.
vflow::ProjectDescriptor desc;
desc.workspace_token = "D:/the/path/to/workspace";
desc.project_name = "my_first_project";
auto out_desc = vflow::Project::Create(desc);

std::cout << "My first VisionFlow project was created at: "
          << out_desc.created_time << std::endl;
std::cout << "The VisionFlow version the project was created with: "
          << out_desc.sdk_version << std::endl;

// You can then open the project with either desc or out_desc
auto project = vflow::Project::Open(desc);
std::cout << "My first VisionFlow project's name is: "
          << project->descriptor().project_name << std::endl;
To be completed
Adding Tools#
VisionFlow provides a variety of tools that you can freely combine and connect to accomplish different processing workflows. You can find all the tools provided by VisionFlow in the Tools Usage and Graph sections.
In this example, we will add an Input tool and a Segmentation tool, and connect the Segmentation tool to the output of the Input tool:
std::string input_id = project->add_tool("Input");
std::string segmentation_id = project->add_tool("Segmentation");
bool connected = project->auto_connect(input_id, segmentation_id);
if (!connected) {
  std::cerr << "auto_connect failed. Please use Project::connect instead."
            << std::endl;
}
input_id = project.add_tool("Input")
segmentation_id = project.add_tool("Segmentation")
connected = project.auto_connect(input_id, segmentation_id)
if not connected:
    print("auto_connect failed. Please use Project.connect instead.")
To be completed
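If auto_connect fails, the error message above suggests wiring the two tools explicitly with Project::connect. This guide does not show that interface's exact signature, so the following is only a sketch under assumed identifiers; the {tool_id, port_name} pairs mirror the identifier style used elsewhere on this page, and the port name "image" is an assumption for illustration (consult Graph for your tools' real port names):

// Hypothetical fallback sketch: explicitly connect the Input tool's output
// port to the Segmentation tool's input port. The "image" port names here
// are assumed, not necessarily the real identifiers.
project->connect({input_id, "image"}, {segmentation_id, "image"});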
Importing Images#
To be completed
Adding Labels#
To be completed
Setting Training Parameters#
To be completed
Training and Configuration Parameters#
To be completed
Exporting the Model#
After training the model and configuring the parameters, you may want to deploy your model to other hosts. One way to do this is to copy the project directly to another host and use the copied project to deploy your detection workflow. However, since the project contains all of the data used for training and validating the model, it can be quite large, making direct copying inconvenient. To facilitate deployment, you can instead export all the models and configured parameters into a standalone file, as shown below:
project->export_model("D:/path/to/save/my_first_project.vfmodel");
project.export_model("D:/path/to/save/my_first_project.vfmodel")
To be completed
If everything is successful, you should now be able to find the my_first_project.vfmodel file in the D:/path/to/save/ directory. You can copy this file to the machine where you need to deploy the detection workflow and proceed with the following deployment steps.
Loading Exported Model#
Before loading the exported model, make sure that the VisionFlow library has been initialized. For the detailed initialization process, please refer to the Initializing section.
After that, you can open your exported model as shown in the code below:
vflow::Model model("D:/path/to/model.vfmodel");
model = vflow.Model("D:/path/to/model.vfmodel")
To be completed
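Opening a model file can fail, for example if the path is wrong or the file was produced by an incompatible VisionFlow version. As the full example at the end of this page suggests, errors are reported as C++ exceptions, so a minimal guard might look like the following sketch (assuming the constructor throws a std::exception-derived type on failure):

try {
  vflow::Model model("D:/path/to/model.vfmodel");
  // ... work with the model ...
} catch (const std::exception &ex) {
  std::cerr << "Failed to load model: " << ex.what() << std::endl;
}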
Setting Inference Parameters#
We provide some convenient interfaces that allow you to read and modify certain parameters within the model during the deployment phase. Although in practical scenarios only a limited set of parameters should be modified at deployment time, the interface allows any parameter within the model to be changed. After opening the model, you can modify a parameter by reading it from the model, making the necessary changes, and then saving it back to the model, as shown below:
// Read the parameter from the corresponding tool, using the tool's name in your project
std::string segmentation_id = "Segmentation";
auto filter_param = model.get_param({segmentation_id, "filter.args"});
if (!filter_param) {
  std::cerr << "Filter parameter for Segmentation does not exist." << std::endl;
  exit(-1);
}

// Here, we modify the filter parameters for defects of the class named "MyClass".
filter_param->as<vflow::param::PolygonsFilterParameters>()
    .get_class_thresholds("MyClass")
    .set_enable(true)
    .set_area_range({100, 50000});

// Then, you can save the modified parameter back into the model
model.set_param({segmentation_id, "filter.args"}, *filter_param);
To be completed
Warning
The parameters set using the model.set_param(***) interface are only effective within the currently opened model. After you close this model and reopen it, these modifications will be lost. If you want to save these modifications permanently, you need to resave the opened model as a new model file, as shown in the code below:
model.resave_to("D:/other/path/model_2.vfmodel");
model.resave_to("D:/other/path/model_2.vfmodel")
To be completed
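After resaving, later deployments can load the new file directly, so the adjusted parameters are in effect without being set again (assuming model_2.vfmodel is the copy you ship):

// Load the resaved model, which already contains the modified parameters
vflow::Model model2("D:/other/path/model_2.vfmodel");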
Inference#
Before starting inference, we need to create a runtime that can execute based on the model data. There are various strategies for creating a runtime; here, for simplicity, we directly use the default parameters to run all the tools in the model:
vflow::runtime::AllTools strategy;
// If an upstream model or parameter in your model was updated later than a
// downstream one, and you are sure this is not an error, use this option
// to ignore the update-time check.
strategy.options.ignore_update_time_requirement = true;
// If your camera's image-acquisition procedure is not registered in
// VisionFlow, the corresponding node in your flow graph is a virtual node,
// and this option needs to be enabled.
strategy.options.allow_unrealized_oper = true;
auto runtime = model.create_runtime(strategy);
To be completed
After creating the runtime, we can use it to inspect our images. The runtime requires a sample to store the input image, intermediate results, and the final output of the entire detection process. Therefore, we create a sample and add our image to it as shown below, and then execute our vision processing flow on this sample:
std::string input_id = "Input"; // The id of the Input tool
auto sample = runtime.create_sample();
auto image = vflow::Image::FromFile("D:/path/to/image.png");
vflow::helper::add_image_to_sample(sample, image, input_id);
// Then we can execute the processing flow on the sample
runtime.execute(sample);
To be completed
After execution, we can retrieve each tool's detection results from the sample and process them according to our requirements; the ID of each intermediate or final result can be obtained from the detailed workflow diagram of each tool:
auto result = sample.get({segmentation_id, "pred"});
std::cout << "Result: " << result->to_json().to_string() << std::endl;

const auto &segment_pred = result->as<vflow::props::PolygonRegionList>();
for (const auto &[id, region] : segment_pred) {
  std::cout << "Found a defect: " << id
            << ", name: " << region.name()
            << ", area: " << region.area() << std::endl;
}

// Draw the defects on the image and show the image:
vflow::img::draw(image, segment_pred.to_multi_polygons(), {10, 10, 240}, 2);
image.show();
To be completed
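Note that a fresh sample is created for each image: in a production loop you would repeat the create-sample / add-image / execute / read-results cycle once per captured frame, as the full example below does.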
Full Example#
The full online inference example is shown below:
#include <cstdlib>
#include <exception>
#include <iostream>
#include <string>

#include "visionflow/visionflow.hpp"

namespace vflow = visionflow;

void my_logger_output(int log_level, const char *content, size_t len) {
  if (log_level > 2) {
    std::cout << std::string(content, len) << std::endl;
  }
}

int main(int /*argc*/, char ** /*argv*/) try {
  vflow::InitOptions opts;

  // Set the log output file
  opts.logger.file_sink = "visionflow.log";
  // Set whether to output the logs to the standard output terminal
  opts.logger.stdout_sink = true;
  // You can customize the handling of VisionFlow's log output by setting a
  // log output callback function:
  // opts.logger.func_sink = my_logger_output;

  // Set the language to Chinese
  opts.language = "zh_CN";

  vflow::initialize(opts);

  vflow::Model model("D:/path/to/save/model.vfmodel");

  // Read the parameter from the corresponding tool, using the tool's name
  // in your project
  std::string segmentation_id = "Segmentation";
  auto filter_param = model.get_param({segmentation_id, "filter.args"});
  if (!filter_param) {
    std::cerr << "Filter parameter for Segmentation does not exist." << std::endl;
    exit(-1);
  }

  // Here, we modify the filter parameters for defects of the class named "cls-1".
  filter_param->as<vflow::param::PolygonsFilterParameters>()
      .get_class_thresholds("cls-1")
      .set_enable(true)
      .set_area_range({0, 500000});

  // Then, you can save the modified parameter back into the model
  model.set_param({segmentation_id, "filter.args"}, *filter_param);

  model.resave_to("D:/other/path/model_2.vfmodel");

  // Before starting inference, we need to create a runtime that can execute
  // based on the model data. There are various strategies for creating a
  // runtime; here, for simplicity, we directly use the default parameters
  // to run all the tools in the model:
  vflow::runtime::AllTools strategy;
  // If an upstream model or parameter in your model was updated later than
  // a downstream one, and you are sure this is not an error, use this
  // option to ignore the update-time check.
  strategy.options.ignore_update_time_requirement = true;
  // If your camera's image-acquisition procedure is not registered in
  // VisionFlow, the corresponding node in your flow graph is a virtual
  // node, and this option needs to be enabled.
  strategy.options.allow_unrealized_oper = true;
  auto runtime = model.create_runtime(strategy);

  // After creating the runtime, we can use it to inspect our images. The
  // runtime requires a sample to store the input image, intermediate
  // results, and the final output of the entire detection process, so we
  // first create a sample and add our image to it:
  std::string input_id = "Input";
  while (true) {
    auto sample = runtime.create_sample();
    auto image = vflow::Image::FromFile("D:/path/to/image.png");
    vflow::helper::add_image_to_sample(sample, image, input_id);
    runtime.execute(sample);

    auto result = sample.get({segmentation_id, "pred"});

    std::cout << "Result: " << result->to_json().to_string() << std::endl;

    const auto &segment_pred = result->as<vflow::props::PolygonRegionList>();
    for (const auto &[id, region] : segment_pred) {
      std::cout << "Found a defect: " << id << ", name: " << region.name()
                << ", area: " << region.area() << std::endl;
    }

    // Draw the defects on the image and show the image:
    vflow::img::draw(image, segment_pred.to_multi_polygons(), {10, 10, 240}, 2);
    image.show();
  }

  return 0;
} catch (const std::exception &ex) {
  std::cerr << "Unexpected Escaped Exception: " << ex.what();
  return -1;
}
To be completed