TPUs
Tensor Processing Units designed specifically for machine learning matrix operations and neural network training.
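To make the category concrete, here is a minimal sketch (in NumPy, with illustrative shapes of my choosing) of the workload this blurb describes: a dense neural-network layer reduces to a matrix multiply, which is exactly what a TPU's systolic array accelerates.

```python
import numpy as np

# A dense neural-network layer is, at its core, a matrix multiply --
# the operation TPU matrix units are built to accelerate.
# Shapes here are illustrative; TPUs typically run this in bf16 or fp8.
batch, d_in, d_out = 32, 512, 256
x = np.random.randn(batch, d_in).astype(np.float32)   # input activations
w = np.random.randn(d_in, d_out).astype(np.float32)   # layer weights
b = np.zeros(d_out, dtype=np.float32)                 # bias

y = np.maximum(x @ w + b, 0.0)  # matmul + bias + ReLU
print(y.shape)  # (32, 256)
```

Training and inference are dominated by long sequences of such matmuls, which is why peak FLOPS and memory bandwidth are the headline numbers in the entries below.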
Google DeepMind TPU
US · Mountain View, CA. Trillium TPU v6 (2024): 4.7x peak compute per chip over the v5 generation; paired with custom Axion ARM CPUs.
Cerebras Systems
US · Sunnyvale, CA. Wafer-Scale Engine (WSE-3): 900K cores, 4T transistors. OpenAI $20B+ chip supply deal. $1B Series H (Feb 2026) at $23B valuation. Filed Nasdaq IPO April 2026 targeting $35B+ valuation.
SambaNova
US · Palo Alto, CA. Reconfigurable Dataflow Units (RDU). Rejected an Intel acquisition; instead took a $350M strategic investment from Intel Capital (Feb 2026) plus a partnership. SambaNova Cloud expanding on Intel Xeon infrastructure. Enterprise AI inference focus.
AWS (Trainium/Inferentia)
US · Seattle, WA. Trainium2 and Inferentia2. Anthropic training on 500K Trainium2 chips.
Google (TPU)
US · Mountain View, CA. TPU v7 Ironwood (announced April 2025): 4,614 TFLOPS FP8 and 192 GB HBM per chip; scales to 9,216 chips per pod (42.5 exaflops). Inference-first architecture. Trillium (v6) remains deployed at scale.
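The Ironwood pod figure follows directly from the per-chip number quoted above; a quick arithmetic check (using only the two figures from this entry):

```python
# Sanity-check the Ironwood pod-scale arithmetic from the entry above.
per_chip_tflops = 4_614   # peak FP8 TFLOPS per chip
chips_per_pod = 9_216     # maximum pod size

pod_flops = per_chip_tflops * 1e12 * chips_per_pod
pod_exaflops = pod_flops / 1e18
print(f"{pod_exaflops:.1f} exaflops")  # → 42.5 exaflops
```

So the 42.5-exaflop figure is simply 9,216 chips at full FP8 peak, not a separately measured number.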