Input and Output

I/O in SiMa NEAT is explicit and contract-driven. You select how data enters and exits the pipeline, then tune runtime behavior around those contracts.

Use this page as a decision guide:

  • Choose how input enters: source-managed vs app-pushed.
  • Choose output style: rich samples vs normalized tensors.
  • Match that choice to your runtime pattern (service, stream, or batch/offline).

Input patterns

File/stream input groups

Use group helpers from the node-group APIs when the source is image/video/RTSP. They package common GStreamer source recipes and reduce boilerplate.

Language mapping:

  • C++: simaai::neat::nodes::groups::VideoInputGroup(...), RtspDecodedInput(...)
  • Python: neat.groups.video_input(...), neat.groups.rtsp_decoded_input(...)

#include "neat/session.h"
#include "neat/node_groups.h"

simaai::neat::Session session;

simaai::neat::nodes::groups::VideoInputGroupOptions vopt;
vopt.path = "/data/sample.mp4";
session.add(simaai::neat::nodes::groups::VideoInputGroup(vopt));

Use this pattern when:

  • Input comes from files, cameras, or RTSP URLs.
  • You want decode/source behavior handled inside the pipeline.
  • You are building media-first flows (decode -> convert -> infer -> render/stream).
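Under the hood, a group helper has to pick a source recipe from the path or URL it is given. A minimal sketch of that dispatch, with illustrative names (classify_source and SourceKind are not SDK symbols, and the real helpers may use different rules):

```cpp
#include <string>

// Illustrative only: classify a source string the way a group helper might,
// so the matching GStreamer recipe (rtspsrc, v4l2src, filesrc + decode)
// can be selected.
enum class SourceKind { File, Camera, Rtsp, Unknown };

SourceKind classify_source(const std::string& uri) {
    auto starts_with = [&](const char* p) { return uri.rfind(p, 0) == 0; };
    if (starts_with("rtsp://")) return SourceKind::Rtsp;       // -> rtspsrc recipe
    if (starts_with("/dev/video")) return SourceKind::Camera;  // -> v4l2src recipe
    if (starts_with("/") || starts_with("file://"))
        return SourceKind::File;                               // -> filesrc + decode
    return SourceKind::Unknown;
}
```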

Push input (nodes::Input)

Use the input node pattern for application-driven frame/tensor push. This is the most common pattern for inference services.

Language mapping:

  • C++: session.add(simaai::neat::nodes::Input(...))
  • Python: session.add(neat.nodes.input(...))

#include "neat/session.h"
#include "neat/nodes.h"

simaai::neat::Session session;

simaai::neat::InputOptions iopt;
iopt.format = "RGB";
iopt.width = 224;
iopt.height = 224;
session.add(simaai::neat::nodes::Input(iopt));

Use this pattern when:

  • Your app already produces frames/tensors.
  • You need request/response or queue-controlled async inference.
  • You want explicit push/backpressure behavior with RunOptions.
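The push/backpressure contract can be pictured as a bounded queue sitting between the app and the pipeline: push blocks (or is otherwise throttled) once the queue is full. A self-contained sketch of those semantics; this illustrates the behavior RunOptions tunes, not the SDK's actual implementation:

```cpp
#include <condition_variable>
#include <cstddef>
#include <deque>
#include <mutex>

// Illustrative sketch: a bounded producer/consumer queue. push() blocks
// while the queue is full -- the backpressure an app-pushed Input node
// exposes to its caller.
template <typename T>
class BoundedQueue {
public:
    explicit BoundedQueue(std::size_t capacity) : capacity_(capacity) {}

    void push(T item) {
        std::unique_lock<std::mutex> lock(m_);
        not_full_.wait(lock, [&] { return q_.size() < capacity_; });
        q_.push_back(std::move(item));
        not_empty_.notify_one();
    }

    T pull() {
        std::unique_lock<std::mutex> lock(m_);
        not_empty_.wait(lock, [&] { return !q_.empty(); });
        T item = std::move(q_.front());
        q_.pop_front();
        not_full_.notify_one();
        return item;
    }

    std::size_t size() {
        std::lock_guard<std::mutex> lock(m_);
        return q_.size();
    }

private:
    std::size_t capacity_;
    std::deque<T> q_;
    std::mutex m_;
    std::condition_variable not_empty_, not_full_;
};
```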

Output patterns

Rich output (nodes::Output)

Use the output node pattern when you need rich Sample output (metadata plus payload) with a pull-side buffering policy.

Language mapping:

  • C++: session.add(simaai::neat::nodes::Output(...))
  • Python: session.add(neat.nodes.output(...))

#include "neat/session.h"
#include "neat/nodes.h"

simaai::neat::Session session;
session.add(simaai::neat::nodes::Input({}));
session.add(simaai::neat::nodes::Output(simaai::neat::OutputOptions::Latest()));

auto run = session.build(simaai::neat::Tensor{}, simaai::neat::RunMode::Async);
run.push(simaai::neat::Tensor{});
auto sample = run.pull(1000); // returns Sample (metadata + payload)

Use this pattern when:

  • You need full Sample metadata (stream/frame identity, payload details).
  • You apply custom post-processing/business logic after pull.
  • You want output buffering behavior controlled via OutputOptions.
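OutputOptions::Latest suggests drop-old, keep-newest buffering: a slow consumer always pulls the freshest result rather than a backlog. A minimal sketch of that semantics, assuming a capacity-1 overwrite slot (LatestSlot is an illustrative name, not an SDK type):

```cpp
#include <optional>
#include <utility>

// Illustrative sketch of "Latest" semantics: each new sample overwrites the
// previous one, and a pull consumes the slot.
template <typename Sample>
class LatestSlot {
public:
    void push(Sample s) { slot_ = std::move(s); }  // overwrite, never block

    std::optional<Sample> pull() {
        auto out = std::move(slot_);
        slot_.reset();  // consumed: the next pull needs a fresh sample
        return out;
    }

private:
    std::optional<Sample> slot_;
};
```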

Tensor-first output (add_output_tensor)

Use Session::add_output_tensor(...) for a simpler tensor-oriented output path with format/shape normalization.

Use this pattern when:

  • Your consumer expects predictable tensor format/shape.
  • You do not need richer media/sample envelope data.
  • You want a lower-boilerplate model-serving output path.

#include "neat/session.h"
#include "neat/nodes.h"

simaai::neat::Session session;
session.add(simaai::neat::nodes::Input({}));
session.add_output_tensor({});

auto run = session.build(simaai::neat::Tensor{}, simaai::neat::RunMode::Async);
run.push(simaai::neat::Tensor{});
auto out = run.pull_tensor(1000); // tensor-first consumption
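Format/shape normalization typically means converting whatever layout the pipeline produces into the layout the consumer expects. As one concrete example of what such a step can entail (an assumption for illustration, not the documented contract), interleaved HWC uint8 pixels to planar CHW float in [0, 1]:

```cpp
#include <cstdint>
#include <vector>

// Illustrative sketch: interleaved HWC uint8 -> planar CHW float32,
// scaled to [0, 1]. The layout and scaling here are assumptions.
std::vector<float> hwc_u8_to_chw_f32(const std::vector<std::uint8_t>& src,
                                     int h, int w, int c) {
    std::vector<float> dst(static_cast<std::size_t>(h) * w * c);
    for (int y = 0; y < h; ++y)
        for (int x = 0; x < w; ++x)
            for (int ch = 0; ch < c; ++ch)
                dst[(ch * h + y) * w + x] =
                    src[(y * w + x) * c + ch] / 255.0f;
    return dst;
}
```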

RTSP mode

For server-style output, use Session::run_rtsp(...) and configure RtspServerOptions.

Use this when:

  • The pipeline should publish an endpoint for viewers/downstream systems.
  • You need long-running streaming service behavior.
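To make the configuration concrete, here is a hypothetical sketch of the kind of fields an RTSP server configuration carries and the endpoint URL they produce; RtspOptionsSketch and its members are illustrative names, not the actual RtspServerOptions API:

```cpp
#include <string>

// Hypothetical sketch: typical RTSP server settings (port, mount point)
// and the viewer-facing endpoint they imply.
struct RtspOptionsSketch {
    int port = 8554;
    std::string mount = "/stream";
};

std::string endpoint_url(const RtspOptionsSketch& o, const std::string& host) {
    return "rtsp://" + host + ":" + std::to_string(o.port) + o.mount;
}
```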

Quick decision guide

  • Source is file/camera/RTSP: use input groups from nodes::groups.
  • Source is app-produced tensor/frame: use nodes::Input.
  • Need rich metadata-aware outputs: use nodes::Output.
  • Need normalized tensor outputs: use add_output_tensor(...).
  • Need network-served stream output: use run_rtsp(...).

See also

Tutorials