OneAPI (compute acceleration)
oneAPI is an open standard, adopted by Intel, for a unified application programming interface intended to be used across different computing accelerator architectures, including GPUs, AI accelerators and field-programmable gate arrays. It is intended to eliminate the need for developers to maintain separate code bases, multiple programming languages, tools, and workflows for each architecture.
oneAPI competes with other GPU computing stacks: CUDA by Nvidia and ROCm by AMD.
Specification
The oneAPI specification extends existing developer programming models to enable multiple hardware architectures through a data-parallel language, a set of library APIs, and a low-level hardware interface to support cross-architecture programming. It builds upon industry standards and provides an open, cross-platform developer stack.
Data Parallel C++
DPC++ is a programming language implementation of oneAPI, built upon the ISO C++ and Khronos Group SYCL standards. DPC++ extends SYCL with features proposed for inclusion in future revisions of the SYCL standard, including unified shared memory, group algorithms, and sub-groups.
Libraries
The set of APIs spans several domains, including libraries for linear algebra, deep learning, machine learning, video processing, and others.

| Library Name | Short Name | Description |
| --- | --- | --- |
| oneAPI DPC++ Library | oneDPL | Algorithms and functions to speed DPC++ kernel programming |
| oneAPI Math Kernel Library | oneMKL | Math routines including matrix algebra, FFT, and vector math |
| oneAPI Data Analytics Library | oneDAL | Machine learning and data analytics functions |
| oneAPI Deep Neural Network Library | oneDNN | Neural networks functions for deep learning training and inference |
| oneAPI Collective Communications Library | oneCCL | Communication patterns for distributed deep learning |
| oneAPI Threading Building Blocks | oneTBB | Threading and memory management template library |
| oneAPI Video Processing Library | oneVPL | Real-time video encode, decode, transcode, and processing |
The source code of parts of the above libraries is available on GitHub.
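As a sketch of the DPC++/SYCL programming model described above, the following example uses the unified shared memory extension to run a vector addition on whichever device the default selector picks. It assumes a SYCL 2020 implementation such as the Intel oneAPI DPC++/C++ Compiler is installed; the array size `N` is illustrative:

```cpp
#include <sycl/sycl.hpp>
#include <cstdio>

int main() {
    // The queue targets whatever device the runtime selects (GPU, CPU, ...).
    sycl::queue q{sycl::default_selector_v};

    constexpr size_t N = 1024;
    // Unified shared memory: one pointer valid on both host and device.
    float *a = sycl::malloc_shared<float>(N, q);
    float *b = sycl::malloc_shared<float>(N, q);
    float *c = sycl::malloc_shared<float>(N, q);
    for (size_t i = 0; i < N; ++i) { a[i] = 1.0f; b[i] = 2.0f; }

    // Data-parallel kernel: one work-item per element.
    q.parallel_for(sycl::range<1>{N}, [=](sycl::id<1> i) {
        c[i] = a[i] + b[i];
    }).wait();

    std::printf("c[0] = %f\n", c[0]);

    sycl::free(a, q); sycl::free(b, q); sycl::free(c, q);
    return 0;
}
```

Because the same pointer is dereferenced on host and device, no explicit buffer creation or copy calls are needed, which is the main ergonomic difference unified shared memory adds over base SYCL buffers.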
The oneAPI documentation also lists the "Level Zero" API, which defines the low-level direct-to-metal interfaces, and a set of ray tracing components with their own APIs.
Licensing
The licensing of oneAPI components falls into three major categories: open‑source permissive licences, proprietary vendor licences, and hybrid models that combine elements of both. Here is an overview of some components:

| Component | Typical license / notes | Source URL |
| --- | --- | --- |
| oneAPI Threading Building Blocks | “Apache 2.0 – open‑source project under UXL Foundation” | https://github.com/uxlfoundation/oneTBB |
| oneAPI Data Analytics Library | “Apache 2.0 – open‑source; Intel toolkit binaries may use Intel EULA” | https://github.com/uxlfoundation/oneDAL |
| oneAPI Deep Neural Network Library | “Apache 2.0 – open‑source under UXL Foundation” | https://github.com/uxlfoundation/oneDNN |
| oneAPI DPC++ Library | “Apache 2.0 – open‑source data‑parallel algorithms library” | https://github.com/oneapi-src/oneDPL |
| oneAPI Math Library | “Apache 2.0 – unified math interface library” | https://github.com/uxlfoundation/oneMath |
| oneAPI Math Kernel Library | “Intel Simplified Software License – binary redistribution under Intel terms” | https://www.intel.com/content/www/us/en/developer/articles/tool/onemkl-license-faq.html |
| oneAPI Collective Communications Library | “Apache 2.0 – open‑source communication layer” | https://github.com/uxlfoundation/oneCCL |
| oneAPI Video Processing Library | “Apache 2.0 – open‑source media‑processing interface” | https://github.com/uxlfoundation/oneVPL |
| oneAPI DPC++/C++ Compiler | “Open‑source front‑end under Apache 2.0 with LLVM exceptions; Intel binaries under Intel EULA” | https://github.com/intel/llvm |
| oneAPI Level Zero Loader & Runtime | “MIT License – open‑source GPU/accelerator runtime” | https://github.com/oneapi-src/level-zero |
| Intel Integrated Performance Primitives | “Intel Simplified Software License – closed‑source library in oneAPI toolkit” | https://community.intel.com/t5/Intel-oneAPI-Math-Kernel-Library/Using-community-license-of-Intel-MKL-for-multiple-users/m-p/1095247 |
| Intel oneAPI Base Toolkit | “Commercial license – free download but subject to Intel’s terms” | https://alfasoft.com/ab/software/development-tools/high-performance-computing-hpc/intel-oneapi-base-toolkit/ |
| Intel oneAPI Base & IoT Toolkit | “Named‑user or seat‑based commercial license under Intel EULA” | https://alfasoft.com/ab/software/development-tools/mobile-and-embedded/intel-oneapi-base-iot-toolkit/ |
| Intel oneAPI HPC Toolkit | “Commercial binaries under Intel EULA/ISSL; not fully open‑source” | https://www.intel.com/content/www/us/en/docs/oneapi/installation-guide-linux/2023-0/list-available-toolkits-components-and-runtime.html |
| Intel oneAPI IoT Toolkit | “Commercial license for embedded/IoT workflows” | https://www.intel.com/content/www/us/en/docs/oneapi/installation-guide-linux/2023-0/list-available-toolkits-components-and-runtime.html |
| Intel oneAPI Rendering Toolkit | “Some sub‑components open‑source under Apache 2.0; toolkit packaging commercial” | https://oneapi-src.github.io/oneapi-ci/ |
Permissive open‑source licences
These licences grant broad rights, such as use, modification, and distribution, and are OSI‑approved. Many oneAPI libraries use such licences, enabling community contribution and redistribution with minimal restrictions.
Proprietary vendor licences
Some oneAPI components are distributed under proprietary or commercial licences, such as the Intel Simplified Software License (ISSL). Software released under the ISSL is not considered to fully comply with standard open‑source definitions.
Hardware abstraction layer
oneAPI Level Zero, the low-level hardware interface, defines a set of capabilities and services that a hardware accelerator needs to interface with compiler runtimes and other developer tools.
Implementations
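To illustrate what a direct-to-metal interface looks like in practice, the following minimal C sketch initializes the Level Zero driver stack and enumerates available accelerator devices. It assumes the open-source Level Zero loader and its `ze_api.h` header are installed:

```c
#include <level_zero/ze_api.h>
#include <stdio.h>
#include <stdlib.h>

int main(void) {
    // Initialize the Level Zero driver stack.
    if (zeInit(0) != ZE_RESULT_SUCCESS) {
        fprintf(stderr, "Level Zero initialization failed\n");
        return 1;
    }

    // First call with a NULL array queries how many drivers exist.
    uint32_t driver_count = 0;
    zeDriverGet(&driver_count, NULL);
    if (driver_count == 0) {
        fprintf(stderr, "no Level Zero drivers found\n");
        return 1;
    }

    ze_driver_handle_t *drivers = malloc(driver_count * sizeof *drivers);
    zeDriverGet(&driver_count, drivers);

    // Enumerate devices exposed by the first driver.
    uint32_t device_count = 0;
    zeDeviceGet(drivers[0], &device_count, NULL);
    printf("driver 0 exposes %u device(s)\n", device_count);

    free(drivers);
    return 0;
}
```

The two-call pattern (query the count, then fill a caller-allocated array) runs through the whole Level Zero API, which is one reason it is well suited as a target for compiler runtimes rather than for direct application use.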
Intel has released oneAPI production toolkits that implement the specification and add CUDA code migration, analysis, and debug tools. These include the Intel oneAPI DPC++/C++ Compiler, Intel Fortran Compiler, Intel VTune Profiler, and multiple performance libraries.
Codeplay has released an open-source layer that allows oneAPI and SYCL/DPC++ to run atop Nvidia GPUs via CUDA.
University of Heidelberg has developed a SYCL/DPC++ implementation for both AMD and Nvidia GPUs.
Huawei released a DPC++ compiler for their Ascend AI Chipset.
Fujitsu has created an open-source ARM version of the oneAPI Deep Neural Network Library for their Fugaku CPU.