OpenVINO
OpenVINO is an open-source software toolkit developed by Intel for optimizing and deploying deep learning models. It supports models from several popular frameworks and covers categories such as large language models, computer vision, and generative AI.
OpenVINO is primarily optimized for Intel hardware, but it also supports ARM/ARM64 processors. It sees use in AI sound-processing drivers when paired with Intel's Gaussian & Neural Accelerator.
Written in C++, it provides APIs for C and Python, as well as Node.js.
OpenVINO is cross-platform and free for use under the Apache License 2.0.
Workflow
The simplest OpenVINO usage involves obtaining a model and running it as is. For the best results, however, a more complete workflow is recommended:
- obtain a model in one of the supported frameworks,
- convert the model to OpenVINO IR using the OpenVINO Converter tool,
- optimize the model, using the training-time or post-training options provided by NNCF (Neural Network Compression Framework),
- execute inference with the OpenVINO Runtime, specifying one of several inference modes.
OpenVINO model format
Models of the supported formats may also be used for inference directly, without prior conversion to OpenVINO IR. Such an approach is more convenient but offers fewer optimization options and lower performance, since the conversion is performed automatically before inference. Some pre-converted models can be found in the Hugging Face repository.
The supported model formats are:
- PyTorch
- TensorFlow
- TensorFlow Lite
- ONNX
- PaddlePaddle
- JAX/Flax
OS support