C++ inferencing APIs
TensorFlow Lite utilities
The following APIs simplify your code when working with a tflite::Interpreter.
namespace coral
Functions
- bool MatchShape(absl::Span<const int> shape, const std::vector<int> &pattern)
  Checks whether a vector/tensor shape matches a dimensional pattern.
  Negative numbers in the pattern indicate that the corresponding shape dimension can be anything. Use -1 in the pattern for consistency.
  - Parameters
    - shape: The shape you want to evaluate.
    - pattern: The pattern to compare against.
  - Return
    True if the shape matches, False if not.
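  For illustration, a minimal sketch (the helper name, header path, and the {1, -1, -1, 3} pattern are assumptions, not part of the API) that checks whether a model expects a batch of RGB images of any height and width:

  ```cpp
  #include "coral/tflite_utils.h"  // assumed header for these utilities
  #include "tensorflow/lite/interpreter.h"

  // Hypothetical helper: returns true if the first input is [1, H, W, 3].
  bool ExpectsRgbBatch(const tflite::Interpreter& interpreter) {
    const TfLiteTensor* input = interpreter.input_tensor(0);
    // -1 entries match any extent for that dimension.
    return coral::MatchShape(coral::TensorShape(*input), {1, -1, -1, 3});
  }
  ```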
- absl::Span<const int> TensorShape(const TfLiteTensor &tensor)
  Gets the tensor shape.
- int TensorSize(const TfLiteTensor &tensor)
  Gets the tensor size (the total number of elements).
- template <typename T>
  absl::Span<const T> TensorData(const TfLiteTensor &tensor)
  Gets the immutable data from the given tensor.
- template <typename T>
  absl::Span<T> MutableTensorData(const TfLiteTensor &tensor)
  Gets the mutable data from the given tensor.
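  As a sketch of how these two accessors are typically paired (the helper name, header path, and uint8-quantized model are assumptions):

  ```cpp
  #include <algorithm>
  #include <cstdint>
  #include <vector>

  #include "coral/tflite_utils.h"  // assumed header for these utilities
  #include "tensorflow/lite/interpreter.h"

  // Hypothetical helper: copy raw image bytes in, invoke, and read the
  // raw (still-quantized) output scores back out.
  std::vector<uint8_t> RunRaw(tflite::Interpreter* interpreter,
                              const std::vector<uint8_t>& image) {
    auto input =
        coral::MutableTensorData<uint8_t>(*interpreter->input_tensor(0));
    std::copy(image.begin(), image.end(), input.begin());
    interpreter->Invoke();
    auto output = coral::TensorData<uint8_t>(*interpreter->output_tensor(0));
    return std::vector<uint8_t>(output.begin(), output.end());
  }
  ```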
- template <typename InputIt, typename OutputIt>
  OutputIt Dequantize(InputIt first, InputIt last, OutputIt d_first, float scale, int32_t zero_point)
  Dequantizes the values in the range [first, last), writing the results to d_first.
- template <typename T, typename OutputIt>
  OutputIt Dequantize(absl::Span<const T> span, OutputIt d_first, float scale, int32_t zero_point)
  Dequantizes the given span, writing the results to d_first.
- template <typename T>
  std::vector<T> DequantizeTensor(const TfLiteTensor &tensor)
  Returns a dequantized version of the given tensor.
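  Dequantization follows the standard TFLite affine mapping, real = scale * (q - zero_point). A minimal sketch using DequantizeTensor (the helper name and header path are assumptions):

  ```cpp
  #include <vector>

  #include "coral/tflite_utils.h"  // assumed header for these utilities
  #include "tensorflow/lite/interpreter.h"

  // Hypothetical helper: dequantize the first output tensor to floats.
  // Each quantized value q maps to scale * (q - zero_point).
  std::vector<float> GetScores(const tflite::Interpreter& interpreter) {
    return coral::DequantizeTensor<float>(*interpreter.output_tensor(0));
  }
  ```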
- template <typename InputIt, typename OutputIt>
  OutputIt Quantize(InputIt first, InputIt last, OutputIt d_first, float scale, int32_t zero_point)
  Quantizes the values in the range [first, last), writing the results to d_first.
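  A sketch of the inverse direction, writing floats into a quantized input tensor using the tensor's own quantization parameters (the helper name and header path are assumptions):

  ```cpp
  #include <cstdint>
  #include <vector>

  #include "coral/tflite_utils.h"  // assumed header for these utilities
  #include "tensorflow/lite/interpreter.h"

  // Hypothetical helper: quantize float values into a uint8 input tensor.
  void FillQuantizedInput(tflite::Interpreter* interpreter,
                          const std::vector<float>& values) {
    const TfLiteTensor& input = *interpreter->input_tensor(0);
    auto dst = coral::MutableTensorData<uint8_t>(input);
    coral::Quantize(values.begin(), values.end(), dst.begin(),
                    input.params.scale, input.params.zero_point);
  }
  ```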
- absl::Status MakeEdgeTpuInterpreter(const tflite::FlatBufferModel &model, edgetpu::EdgeTpuContext *tpu_context, tflite::ops::builtin::BuiltinOpResolver *resolver, tflite::StatefulErrorReporter *error_reporter, std::unique_ptr<tflite::Interpreter> *interpreter)
  Creates a new interpreter instance for an Edge TPU model.
  Also consider using MakeEdgeTpuInterpreterOrDie().
  - Parameters
    - model: The tflite model.
    - tpu_context: The Edge TPU context, from coral::GetEdgeTpuContext(). If left null, the returned interpreter will not resolve an Edge TPU delegate. The PoseNet custom op is always supported.
    - resolver: Optional. May be null to use a default resolver.
    - error_reporter: Optional. May be null to use the default error reporter, but beware that if null, tflite runtime error messages will not be returned.
    - interpreter: The pointer to receive the new interpreter.
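  A sketch of the status-returning flow with explicit error handling (the helper name and header path are assumptions):

  ```cpp
  #include <iostream>
  #include <memory>

  #include "coral/tflite_utils.h"  // assumed header for these utilities

  // Hypothetical helper: create an interpreter, using defaults for the
  // resolver and error reporter, and report failure instead of crashing.
  std::unique_ptr<tflite::Interpreter> TryMakeInterpreter(
      const tflite::FlatBufferModel& model,
      edgetpu::EdgeTpuContext* tpu_context) {
    std::unique_ptr<tflite::Interpreter> interpreter;
    absl::Status status = coral::MakeEdgeTpuInterpreter(
        model, tpu_context, /*resolver=*/nullptr,
        /*error_reporter=*/nullptr, &interpreter);
    if (!status.ok()) {
      std::cerr << "MakeEdgeTpuInterpreter failed: " << status.message()
                << "\n";
      return nullptr;
    }
    return interpreter;
  }
  ```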
- std::unique_ptr<tflite::Interpreter> MakeEdgeTpuInterpreterOrDie(const tflite::FlatBufferModel &model, edgetpu::EdgeTpuContext *tpu_context = nullptr, tflite::ops::builtin::BuiltinOpResolver *resolver = nullptr, tflite::StatefulErrorReporter *error_reporter = nullptr)
  Returns a new interpreter instance for an Edge TPU model, crashing if it cannot be created.
  For example:

  ```cpp
  const auto model = coral::LoadModelOrDie(model_path);
  auto edgetpu_context = coral::ContainsEdgeTpuCustomOp(*model)
                             ? coral::GetEdgeTpuContextOrDie()
                             : nullptr;
  auto interpreter =
      coral::MakeEdgeTpuInterpreterOrDie(*model, edgetpu_context.get());
  ```

  - Parameters
    - model: The tflite model.
    - tpu_context: The Edge TPU context, from coral::GetEdgeTpuContext(). If left null, the returned interpreter will not resolve an Edge TPU delegate. The PoseNet custom op is always supported.
    - resolver: Optional. May be null to use a default resolver.
    - error_reporter: Optional. May be null to use the default error reporter, but beware that if null, tflite runtime error messages will not be returned.
  - Return
    The new interpreter instance.
- absl::Status SetTensorBuffer(tflite::Interpreter *interpreter, int tensor_index, const void *buffer, size_t buffer_size)
  Replaces an existing tensor buffer with the provided one.
  The caller owns the provided buffer. Tensor quantization parameters are preserved. This function is a required 'hack' for performance reasons until this functionality becomes part of the TensorFlow Lite API.
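  A sketch of backing an input tensor with caller-owned memory (the helper name and header path are assumptions; note that TensorSize counts elements, which equals bytes only for uint8 tensors):

  ```cpp
  #include <cstdint>
  #include <vector>

  #include "coral/tflite_utils.h"  // assumed header for these utilities

  // Hypothetical helper: back the first input tensor with a caller-owned
  // buffer to avoid an extra copy. The buffer must outlive every
  // subsequent Invoke() call that uses it.
  absl::Status UseOwnBuffer(tflite::Interpreter* interpreter,
                            std::vector<uint8_t>* buffer) {
    // For a uint8 tensor, element count equals byte count.
    buffer->resize(coral::TensorSize(*interpreter->input_tensor(0)));
    return coral::SetTensorBuffer(interpreter, interpreter->inputs()[0],
                                  buffer->data(), buffer->size());
  }
  ```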
- std::shared_ptr<edgetpu::EdgeTpuContext> GetEdgeTpuContext(const std::string &device, const edgetpu::EdgeTpuManager::DeviceOptions &options = {})
  Returns the TPU context, or nullptr if the requested TPU context is not available.
  - Parameters
    - device: The device to open, specified as one of the following:
      - "": any TPU device
      - "usb": any TPU device on the USB bus
      - "pci": any TPU device on the PCIe bus
      - ":N": the N-th TPU device, e.g. ":0"
      - "usb:N": the N-th TPU device on the USB bus, e.g. "usb:0"
      - "pci:N": the N-th TPU device on the PCIe bus, e.g. "pci:0"
    - options: See edgetpu.h for details.
  All TPUs are always enumerated in the same order, assuming the hardware configuration doesn't change (no devices added or removed between enumerations). Under that assumption, the same index N always points to the same device.
  For example, consider 2 USB devices and 4 PCIe devices connected to the host. To reference the USB devices specifically: "usb:0", "usb:1". To reference the PCIe devices specifically: "pci:0", "pci:1", "pci:2", "pci:3". To reference all devices generically (with no assumption about device type): ":0", ":1", ":2", ":3", ":4", ":5".
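  For example, a sketch that prefers a USB device and falls back to any device (the helper name and header path are assumptions):

  ```cpp
  #include <memory>

  #include "coral/tflite_utils.h"  // assumed header for these utilities

  // Hypothetical helper: prefer a USB-attached Edge TPU, then fall back
  // to any available device; returns nullptr if none is present.
  std::shared_ptr<edgetpu::EdgeTpuContext> OpenAnyTpu() {
    auto context = coral::GetEdgeTpuContext("usb");
    if (!context) context = coral::GetEdgeTpuContext("");
    return context;
  }
  ```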
- std::shared_ptr<edgetpu::EdgeTpuContext> GetEdgeTpuContextOrDie(const std::string &device, const edgetpu::EdgeTpuManager::DeviceOptions &options = {})
  The same as above, but crashes if the requested TPU context is not available.
- std::shared_ptr<edgetpu::EdgeTpuContext> GetEdgeTpuContext(absl::optional<edgetpu::DeviceType> device_type = absl::nullopt, absl::optional<int> device_index = absl::nullopt, const edgetpu::EdgeTpuManager::DeviceOptions &options = {})
  The same as the previously defined GetEdgeTpuContext, except the device parameter is replaced with two separate ones: device_type and device_index.
  Custom options are passed only when both device_type and device_index are non-empty.
- std::shared_ptr<edgetpu::EdgeTpuContext> GetEdgeTpuContextOrDie(absl::optional<edgetpu::DeviceType> device_type = absl::nullopt, absl::optional<int> device_index = absl::nullopt, const edgetpu::EdgeTpuManager::DeviceOptions &options = {})
  The same as above, but crashes if the requested TPU context is not available.
- std::unique_ptr<tflite::FlatBufferModel> LoadModelOrDie(const std::string &path)
  Loads a tflite model from the given file path, or dies trying.
- std::unique_ptr<tflite::FlatBufferModel> LoadModelOrDie(const flatbuffers::FlatBufferBuilder &fbb)
  Loads a tflite model from a flatbuffer, or dies trying.
- absl::Status InvokeWithMemBuffer(tflite::Interpreter *interpreter, const void *buffer, size_t in_size, tflite::StatefulErrorReporter *reporter = nullptr)
  Invokes tflite::Interpreter using the given buffer as the input tensor.
  - Parameters
    - interpreter: An initialized interpreter.
    - buffer: The interpreter input. We assume there is only one input tensor.
    - in_size: The number of elements in the input buffer, which can have padding elements at the end. in_size can be larger than the input tensor size, denoted by n; only the first n elements of the input buffer will be used. in_size cannot be smaller than n.
    - reporter: Optional. If left null, tflite runtime error messages will not be returned. To get tflite runtime error messages, reporter must be set to the one used to create the interpreter.
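  A sketch (the helper name and header path are assumptions; a uint8 input is assumed so the element count equals the byte count):

  ```cpp
  #include <cstdint>
  #include <vector>

  #include "coral/tflite_utils.h"  // assumed header for these utilities

  // Hypothetical helper: run inference straight from a caller-owned image
  // buffer, assuming a single input tensor of at most image.size() elements.
  absl::Status Run(tflite::Interpreter* interpreter,
                   const std::vector<uint8_t>& image) {
    return coral::InvokeWithMemBuffer(interpreter, image.data(), image.size());
  }
  ```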
- absl::Status InvokeWithDmaBuffer(tflite::Interpreter *interpreter, int dma_fd, size_t in_size, tflite::StatefulErrorReporter *reporter = nullptr)
  Invokes tflite::Interpreter using the given DMA file descriptor as the input tensor. Works only for Edge TPU models running on PCIe Edge TPU devices.
  - Parameters
    - interpreter: An initialized interpreter.
    - dma_fd: The DMA file descriptor to use as input.
    - in_size: The number of elements in the input buffer, which can have padding elements at the end. in_size can be larger than the input tensor size, denoted by n; only the first n elements of the input buffer will be used. in_size cannot be smaller than n.
    - reporter: Optional. If left null, tflite runtime error messages will not be returned. To get tflite runtime error messages, reporter must be set to the one used to create the interpreter.
- bool ContainsEdgeTpuCustomOp(const tflite::FlatBufferModel &model)
  Checks whether a tflite model contains any Edge TPU custom operator.
- std::unordered_set<std::string> GetInputTensorNames(const tflite::Interpreter &interpreter)
  Returns all input tensor names for the given tflite::Interpreter.
- const TfLiteTensor *GetInputTensor(const tflite::Interpreter &interpreter, const char *name)
  Returns the input tensor matching name in the given tflite::Interpreter, or nullptr if no such tensor exists.
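  A sketch (the helper name and the tensor name "image" are assumptions):

  ```cpp
  #include "coral/tflite_utils.h"  // assumed header for these utilities

  // Hypothetical helper: look up a named input on a multi-input model;
  // the tensor name "image" is purely illustrative.
  bool HasImageInput(const tflite::Interpreter& interpreter) {
    return coral::GetInputTensor(interpreter, "image") != nullptr;
  }
  ```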
Image classification
Use the following APIs with image classification models.
namespace coral
Functions
- std::vector<Class> GetClassificationResults(absl::Span<const float> scores, float threshold = -std::numeric_limits<float>::infinity(), size_t top_k = std::numeric_limits<size_t>::max())
  Converts classification output tensors into a list of ordered classes.
  - Parameters
    - scores: The classification output tensor (dequantized).
    - threshold: The score threshold for results. All returned results have a score greater than or equal to this value.
    - top_k: The maximum number of predictions to return.
  - Return
    The top_k Class predictions, <score, label_id>, ordered by score (the first element has the highest score).
- std::vector<Class> GetClassificationResults(const tflite::Interpreter &interpreter, float threshold = -std::numeric_limits<float>::infinity(), size_t top_k = std::numeric_limits<size_t>::max())
  Gets results from a classification model as a list of ordered classes.
  - Parameters
    - interpreter: The already-invoked interpreter for your classification model.
    - threshold: The score threshold for results. All returned results have a score greater than or equal to this value.
    - top_k: The maximum number of predictions to return.
  - Return
    The top_k Class predictions, <score, label_id>, ordered by score (the first element has the highest score).
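  A sketch of typical usage after Invoke() (the helper name, header path, and the Class fields id/score are assumptions based on adapter.h):

  ```cpp
  #include <iostream>

  #include "coral/classification/adapter.h"  // assumed header for the adapter
  #include "tensorflow/lite/interpreter.h"

  // Hypothetical helper: print the top 3 classes scoring at least 0.1
  // from an already-invoked classification interpreter.
  void PrintTopClasses(const tflite::Interpreter& interpreter) {
    for (const coral::Class& c : coral::GetClassificationResults(
             interpreter, /*threshold=*/0.1f, /*top_k=*/3)) {
      std::cout << "label " << c.id << ": " << c.score << "\n";
    }
  }
  ```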
- struct Class
  #include <adapter.h>
  Represents a single classification result.
Object detection
Use the following APIs with object detection models.
namespace coral
Functions
- std::vector<Object> GetDetectionResults(absl::Span<const float> bboxes, absl::Span<const float> ids, absl::Span<const float> scores, size_t count, float threshold = -std::numeric_limits<float>::infinity(), size_t top_k = std::numeric_limits<size_t>::max())
  Converts detection output tensors into a list of SSD results.
  - Parameters
    - bboxes: Bounding boxes of detected objects. Four floats per object (box-corner encoding [ymin1, xmin1, ymax1, xmax1, ymin2, xmin2, ...]).
    - ids: Label identifiers of detected objects. One float per object.
    - scores: Confidence scores of detected objects. One float per object.
    - count: The number of detected objects (all tensors defined above have valid data for only this number of objects).
    - threshold: The score threshold for results. All returned results have a score greater than or equal to this value.
    - top_k: The maximum number of predictions to return.
  - Return
    The top_k Object predictions, <id, score, bbox>, ordered by score (the first element has the highest score).
- std::vector<Object> GetDetectionResults(const tflite::Interpreter &interpreter, float threshold = -std::numeric_limits<float>::infinity(), size_t top_k = std::numeric_limits<size_t>::max())
  Gets results from a detection model as a list of ordered objects.
  - Parameters
    - interpreter: The already-invoked interpreter for your detection model.
    - threshold: The score threshold for results. All returned results have a score greater than or equal to this value.
    - top_k: The maximum number of predictions to return.
  - Return
    The top_k Object predictions, <id, score, bbox>, ordered by score (the first element has the highest score).
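  A sketch of typical usage after Invoke() (the helper name, header path, and the Object fields id/score/bbox are assumptions based on adapter.h):

  ```cpp
  #include <iostream>

  #include "coral/detection/adapter.h"  // assumed header for the adapter
  #include "tensorflow/lite/interpreter.h"

  // Hypothetical helper: print up to 5 detections scoring at least 0.5
  // from an already-invoked SSD detection interpreter.
  void PrintDetections(const tflite::Interpreter& interpreter) {
    for (const coral::Object& obj : coral::GetDetectionResults(
             interpreter, /*threshold=*/0.5f, /*top_k=*/5)) {
      std::cout << "id " << obj.id << " score " << obj.score << " box ["
                << obj.bbox.ymin << ", " << obj.bbox.xmin << ", "
                << obj.bbox.ymax << ", " << obj.bbox.xmax << "]\n";
    }
  }
  ```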
- struct Object
  #include <adapter.h>
  Represents a detected object.
namespace coral
Functions
- template <typename T>
  BBox<T> Intersection(const BBox<T> &a, const BBox<T> &b)
  Gets a BBox representing the intersection between two given boxes.
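  A sketch (the header path and the BBox corner-field names are assumptions, matching the box-corner encoding described above):

  ```cpp
  #include "coral/bbox.h"  // assumed header for BBox utilities

  // Hypothetical helper: intersect two normalized boxes.
  coral::BBox<float> Overlap() {
    coral::BBox<float> a = {0.0f, 0.0f, 0.5f, 0.5f};    // ymin, xmin, ymax, xmax
    coral::BBox<float> b = {0.25f, 0.25f, 0.75f, 0.75f};
    return coral::Intersection(a, b);  // spans (0.25, 0.25) to (0.5, 0.5)
  }
  ```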