edgetpu.basic.basic_engine

This API is deprecated. Use the PyCoral APIs instead.
class edgetpu.basic.basic_engine.BasicEngine(model_path, device_path=None)

Base inference engine to execute a TensorFlow Lite model on the Edge TPU.

Parameters:
  • model_path (str) – Path to a TensorFlow Lite (.tflite) file. This model must be compiled for the Edge TPU; otherwise, it simply executes on the host CPU.
  • device_path (str) – The device path for the Edge TPU this engine should use. This argument is needed only when you have multiple Edge TPUs and more inference engines than available Edge TPUs. For details, read how to use multiple Edge TPUs.
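
For example, here is a minimal usage sketch. It assumes a model file named 'model_edgetpu.tflite' (a hypothetical path) that has been compiled for the Edge TPU:

    import numpy as np
    from edgetpu.basic.basic_engine import BasicEngine

    # Load a compiled model ('model_edgetpu.tflite' is a hypothetical path).
    engine = BasicEngine('model_edgetpu.tflite')

    # Build a dummy input of the required flattened size; the input is a
    # quantized uint8 array.
    input_tensor = np.zeros(engine.required_input_array_size(), dtype=np.uint8)

    latency_ms, output = engine.run_inference(input_tensor)
    print('Latency: %.2f ms, output size: %d' % (latency_ms, output.size))
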
device_path()

Gets the path for the Edge TPU that’s associated with this inference engine.

See how to run multiple models with multiple Edge TPUs.

Returns: A string representing this engine’s Edge TPU device path.
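
For instance, a sketch of binding two engines to specific Edge TPUs (the model filenames and device paths below are hypothetical; actual paths depend on your system):

    from edgetpu.basic.basic_engine import BasicEngine

    # Pin each engine to its own Edge TPU (hypothetical device paths).
    engine_a = BasicEngine('model_a_edgetpu.tflite', device_path='/dev/apex_0')
    engine_b = BasicEngine('model_b_edgetpu.tflite', device_path='/dev/apex_1')

    print(engine_a.device_path())  # '/dev/apex_0'
    print(engine_b.device_path())  # '/dev/apex_1'
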
get_all_output_tensors_sizes()

Gets the size of each output tensor.

A model may output several tensors, but run_inference() and get_raw_output() return them concatenated into a single 1-D array. This function provides the size of each original output tensor, allowing you to calculate each tensor’s offset within the concatenated array.

Returns: An array (numpy.ndarray) with the length of each output tensor (this assumes that all output tensors are 1-D).
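
For example, a sketch of splitting the concatenated output back into per-tensor arrays, assuming engine is an initialized BasicEngine that has already run an inference:

    sizes = engine.get_all_output_tensors_sizes()
    raw = engine.get_raw_output()

    # Slice the concatenated 1-D array into one array per output tensor.
    outputs = []
    offset = 0
    for size in sizes:
        outputs.append(raw[offset:offset + int(size)])
        offset += int(size)
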
get_inference_time()

Gets the latency of the most recent inference.

This can be used by higher-level engines for debugging.

Returns: A float representing the inference latency (in milliseconds).
get_input_tensor_shape()

Gets the shape required for the input tensor.

For models trained for image classification / detection, the shape is always [1, height, width, channels]. To be used as input for run_inference(), the tensor must be flattened into a 1-D array of size height * width * channels. To get that 1-D array size directly, use required_input_array_size().

Returns: A 1-D array (numpy.ndarray) representing the required input tensor shape.
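
For example, a sketch of preparing a flattened input, assuming engine is an initialized BasicEngine whose model takes [1, height, width, channels] uint8 input:

    import numpy as np

    _, height, width, channels = engine.get_input_tensor_shape()

    # 'image' stands in for real pixel data (e.g. loaded with PIL and
    # resized to width x height); zeros keep the sketch self-contained.
    image = np.zeros((height, width, channels), dtype=np.uint8)

    flat_input = image.flatten()  # size == height * width * channels
    assert flat_input.size == engine.required_input_array_size()
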
get_num_of_output_tensors()

Gets the number of output tensors.

Returns: An integer representing the number of output tensors.
get_output_tensor_size(index)

Gets the size of a specific output tensor.

Parameters: index (int) – The index position of the output tensor.
Returns: An integer representing the size of the output tensor.
get_raw_output()

Gets the output of the most recent inference.

This can be used by higher-level engines for debugging.

Returns: A 1-D array (numpy.ndarray) representing the output tensor. If there are multiple output tensors, they are concatenated into a single 1-D array. (Same as what’s returned by run_inference().)
model_path()

Gets the file path for the model loaded by this inference engine.

Returns: A string representing the model file’s path.
required_input_array_size()

Gets the required size for the input tensor given to run_inference().

This is the total size of the 1-D array, once the tensor shape is flattened.

Returns: An integer representing the required input tensor size.
run_inference(input, size=None)

Performs inference with a raw input tensor.

Parameters:
  • input (numpy.ndarray) – A 1-D array to use as the input tensor. You can query the required size for this array with required_input_array_size().
  • size (int) – The input buffer size. If size is not None, an exception is raised when size does not match the expected input size, n. If size is None, an exception is raised when the total input buffer size is smaller than n; only the first n bytes of the buffer are used to set the input tensor, and any remaining bytes are ignored. (This behavior allows callers to use input buffers with padding bytes at the end, and provides an extra sanity check that the input matches the model’s expectation.)
Returns:

A 2-tuple with the inference latency in milliseconds (float) and a 1-D array (numpy.ndarray) representing the output tensor. If there are multiple output tensors, they are concatenated into a single 1-D array. For example, if the model outputs 2 tensors with values [1, 2, 3] and [0.1, 0.4, 0.9], the returned 1-D array is [1, 2, 3, 0.1, 0.4, 0.9]. You can calculate the size and offset of each tensor using get_all_output_tensors_sizes(), get_num_of_output_tensors(), and get_output_tensor_size(). Note that the returned array is a reference to the engine’s internal output buffer; deep-copy it if you need to preserve it across subsequent calls to run_inference().
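
For example, a sketch of preserving results across multiple inferences, assuming engine is an initialized BasicEngine and frames is an iterable of flattened uint8 input arrays:

    import numpy as np

    results = []
    for frame in frames:
        latency_ms, output = engine.run_inference(frame)
        # The returned array references engine-owned memory, so copy it
        # before the next run_inference() call if it must be preserved.
        results.append(np.copy(output))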

total_output_array_size()

Gets the expected size of the 1-D output array returned by run_inference() and get_raw_output().

Returns: An integer representing the total size of the concatenated output array.