Coral M.2 Accelerator A+E Key by Seeed Studio
Edge TPU-powered M.2 accelerator for Debian-based systems, enabling high-speed ML inferencing. It supports TensorFlow Lite and fits compatible A+E-key card slots. Customers note its power efficiency and effective performance in containerized Frigate setups.
Highlights
- on-board Edge TPU coprocessor
- high-efficiency performance
- Debian Linux integration
Pros
- high-speed ML inferencing
- Edge TPU coprocessor for on-device compute
- power-efficient: 2 TOPS per watt
- Debian Linux compatibility
- TensorFlow Lite support
Cons
- requires compatible card module slot
- depends on Debian-based system for best use
- limited to models compatible with Edge TPU
Best For
- high-speed, on-device ML inferencing on Debian-based systems
- containerized Frigate setups
Features
- Performs high-speed ML inferencing: the on-board Edge TPU coprocessor performs 4 trillion operations (tera-operations) per second (4 TOPS) while using 0.5 watts per TOPS (2 TOPS per watt). For example, it can execute state-of-the-art mobile vision models such as MobileNet v2 at 400 FPS, in a power-efficient manner.
- Works with Debian Linux: integrates with any Debian-based Linux system with a compatible card module slot.
- Supports TensorFlow Lite: no need to build models from the ground up. TensorFlow Lite models can be compiled to run on the Edge TPU.
- Supports AutoML Vision Edge: easily build and deploy fast, high-accuracy custom image classification models to your device with AutoML Vision Edge.
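The efficiency figures above can be sanity-checked with a little arithmetic. This is only a sketch using the 4 TOPS and 0.5 W/TOPS numbers quoted in the feature list; actual draw varies with workload:

```python
# Sanity-check the quoted Edge TPU performance and power figures.
TOPS_TOTAL = 4.0       # tera-operations per second, from the spec above
WATTS_PER_TOPS = 0.5   # 0.5 W per TOPS, i.e. 2 TOPS per watt

# Implied power draw at full throughput.
power_draw_watts = TOPS_TOTAL * WATTS_PER_TOPS

# Efficiency expressed the other way around.
tops_per_watt = TOPS_TOTAL / power_draw_watts

print(power_draw_watts)  # 2.0 W at full throughput
print(tops_per_watt)     # 2.0 TOPS per watt
```

The two numbers are consistent: 4 TOPS at 0.5 W per TOPS works out to roughly 2 W total, which is why the module runs comfortably in fanless, always-on setups such as a Frigate NVR.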