MediaPipe
MediaPipe is an open-source framework of libraries developed by Google for artificial intelligence and machine learning solutions. These solutions span generative artificial intelligence, real-time computer vision, natural language processing, and audio processing. They run on multiple platforms, including Android, the web (JavaScript), Python, and iOS, with support for edge devices.
History
Google has long used MediaPipe in its products and services. Since 2012, it has been used for real-time analysis of video and audio on YouTube, and over time it has been incorporated into many more products, such as Gmail and Google Home. MediaPipe's first stable release was version 0.5.0. It was made open source in June 2019 at the Conference on Computer Vision and Pattern Recognition in Long Beach, California, by Google Research. This initial release included only five example pipelines: Object Detection, Face Detection, Hand Tracking, Multi-hand Tracking, and Hair Segmentation. From the initial release to April 2023, numerous additional pipelines were added. In May 2023, MediaPipe Solutions was introduced, offering more capabilities for on-device machine learning. MediaPipe is now maintained under Google's subdivision, Google AI Edge.
Solutions
MediaPipe's available solutions are:
- LLM Inference API
- Object detection
- Image classification
- Image segmentation
- Interactive segmentation
- Hand landmark detection
- Gesture recognition
- Image embedding
- Face detection
- Face landmark detection
- Pose landmark detection
- Image generation
- Text classification
- Text embedding
- Language detector
- Audio classification
- Face Detection
- Face Mesh
- Iris
- Hands
- Pose
- Holistic
- Selfie segmentation
- Hair segmentation
- Object detection
- Box tracking
- Instant motion tracking
- Objectron
- KNIFT
- AutoFlip
- MediaSequence
- YouTube 8M
Programming Language
MediaPipe's ability to separate its functionality into a system of components allows for customization. Pre-built solutions are also available, and it can help to start with one of these and adapt it for a particular use case.
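The component-based design described above can be illustrated with a minimal sketch. This is not the real MediaPipe API; it is a toy, stdlib-only illustration of the underlying idea, in which a pipeline is a chain of small components (MediaPipe calls them "calculators"), each consuming the previous component's output:

```python
def to_grayscale(frame):
    # Toy component: average the RGB channels of each pixel.
    return [[sum(px) / 3 for px in row] for row in frame]

def threshold(frame, cutoff=128):
    # Toy component: binarize a grayscale frame.
    return [[1 if v >= cutoff else 0 for v in row] for row in frame]

def run_pipeline(frame, components):
    # Run each component in order, feeding its output to the next,
    # loosely mirroring how a MediaPipe graph passes data between nodes.
    for component in components:
        frame = component(frame)
    return frame

frame = [[(200, 200, 200), (10, 10, 10)]]  # one row of two RGB pixels
result = run_pipeline(frame, [to_grayscale, threshold])
print(result)  # [[1, 0]]
```

Customization in this model amounts to swapping, reordering, or inserting components; pre-built solutions correspond to ready-made chains that can be adjusted rather than built from scratch.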
How MediaPipe Works
MediaPipe contains a multitude of components that work together to form a general-purpose computer vision framework. Each component works in its own way, with a different architecture.
Hand Tracking
MediaPipe includes a hand tracking system designed to run efficiently on devices with limited computational resources. It works by estimating a set of 3D landmarks for each detected hand and is intended to remain stable across a wide range of environments, including different poses, lighting conditions, and motions. The pipeline relies on a pre-trained deep learning model, a detector named BlazePalm, that is trained to detect the palm region of human hands. Starting from the identified palm, MediaPipe uses the palm's position as input to a second model that predicts the positions of key landmarks representing the hand's structure.
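The two-stage design above can be sketched as follows. `detect_palm` and `predict_landmarks` here are stand-in stubs, not MediaPipe's actual BlazePalm or landmark models; only the overall structure (detector proposes a region, landmark model predicts keypoints inside it) reflects the pipeline described in the text:

```python
def detect_palm(frame):
    # Stage 1 (stub): a palm detector returning a bounding box (x, y, w, h).
    return (40, 40, 80, 80)

def predict_landmarks(frame, box):
    # Stage 2 (stub): a landmark model returning 21 (x, y, z) points,
    # the number of landmarks MediaPipe predicts per hand, spaced
    # evenly inside the detected box for illustration.
    x, y, w, h = box
    return [(x + w * i / 20, y + h * i / 20, 0.0) for i in range(21)]

def track_hand(frame):
    box = detect_palm(frame)              # stage 1: find the palm region
    return predict_landmarks(frame, box)  # stage 2: landmarks in that region

landmarks = track_hand(frame=None)
print(len(landmarks))  # 21
```

Splitting detection and landmark prediction lets the expensive detector run only occasionally, while the lighter landmark model runs every frame, which is what makes the pipeline practical on resource-limited devices.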
MediaPipe continuously monitors the confidence of its predictions and re-runs detection when needed to maintain its accuracy, while temporal smoothing helps reduce the jitter between frames. For scenes with more than one hand, the process is repeated independently for each detected region.
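The tracking loop described above can be sketched in a few lines. The threshold and smoothing constants are assumed values for illustration, not MediaPipe's actual parameters: when confidence drops below a cutoff, the (stubbed) detector is re-run to reacquire the hand, and an exponential moving average smooths landmark positions between frames to reduce jitter:

```python
REDETECT_THRESHOLD = 0.5  # assumed confidence cutoff
ALPHA = 0.3               # assumed smoothing factor

def smooth(previous, current, alpha=ALPHA):
    # Exponential moving average of landmark positions across frames.
    if previous is None:
        return current
    return [p + alpha * (c - p) for p, c in zip(previous, current)]

def step(state, landmarks, confidence, redetect):
    # If prediction confidence falls below the threshold, re-run the
    # detector to reacquire the hand; otherwise keep the tracked result.
    if confidence < REDETECT_THRESHOLD:
        landmarks = redetect()
    return smooth(state, landmarks)

prev = [0.0, 0.0]  # previous frame's (toy, 1-point) landmark state
out = step(prev, [10.0, 10.0], confidence=0.9, redetect=lambda: [5.0, 5.0])
print(out)  # [3.0, 3.0]
```

For multiple hands, this loop would simply run once per detected region, with each hand keeping its own smoothing state.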