I am delighted to introduce our second-generation IPU platform with greater processing power, more memory and built-in scalability for handling extremely large Machine Intelligence workloads.
The IPU-Machine M2000 is a plug-and-play Machine Intelligence compute blade that has been designed for easy deployment and supports systems that can grow to massive scale. The slim 1U blade delivers one PetaFlop of Machine Intelligence compute and includes integrated networking technology, optimized for AI scale-out, inside the box.
Each IPU-Machine M2000 is powered by four of our brand new 7nm Colossus™ Mk2 GC200 IPU processors, and is fully supported by our Poplar® software stack.
Users of our Mk1 IPU products can be assured that their existing models and systems will run seamlessly on these new Mk2 IPU systems, while delivering an incredible 8x step up in performance compared to our already class-leading first-generation Graphcore IPU products.

The design of our IPU-Machine M2000 allows customers to build datacenter-scale systems of up to 64,000 IPUs, in IPU-POD™ configuration, that deliver 16 ExaFlops of Machine Intelligence compute. Our new IPU-Machine M2000 is capable of handling even the toughest Machine Intelligence training or large-scale deployment workloads.
You can get started with a single IPU-Machine M2000 box, directly connected to one of your existing CPU-servers, or add up to a total of eight IPU-Machine M2000s connected to this one server. For larger systems, you can use our rack-scale IPU-POD64, comprising 16 IPU-Machine M2000s built into a standard 19-inch rack and scale these racks out to deliver datacenter-scale Machine Intelligence compute.
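The scale-out arithmetic above can be sketched as follows. This is a minimal illustration using only the figures quoted in this announcement: 4 GC200 IPUs and 1 PetaFlop of compute per IPU-Machine M2000, and 16 M2000s per IPU-POD64 rack.

```python
# Scale-out arithmetic for IPU-Machine M2000 systems, using the
# figures quoted above: 4 IPUs and 1 PetaFlop per M2000 blade,
# and 16 blades per IPU-POD64 rack.
IPUS_PER_M2000 = 4
PFLOPS_PER_M2000 = 1
M2000S_PER_POD64 = 16

def system_stats(num_m2000s):
    """Return (IPU count, PetaFlops) for a system of M2000 blades."""
    return num_m2000s * IPUS_PER_M2000, num_m2000s * PFLOPS_PER_M2000

# A single rack-scale IPU-POD64:
pod_ipus, pod_pflops = system_stats(M2000S_PER_POD64)
print(pod_ipus, pod_pflops)          # 64 IPUs, 16 PetaFlops per rack

# The 64,000-IPU datacenter-scale configuration:
racks = 64_000 // pod_ipus           # 1000 IPU-POD64 racks
dc_ipus, dc_pflops = system_stats(racks * M2000S_PER_POD64)
print(racks, dc_ipus, dc_pflops)     # 1000 racks, 64000 IPUs, 16000 PetaFlops = 16 ExaFlops
```

This confirms the headline figures: 1,000 IPU-POD64 racks hold 64,000 IPUs and deliver 16,000 PetaFlops, i.e. the 16 ExaFlops quoted above.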

Connecting IPU-Machine M2000s and IPU-PODs at scale is made possible by our new IPU-Fabric™ technology, which has been designed from the ground up for Machine Intelligence communication and delivers a dedicated low-latency fabric that connects IPUs across the entire datacenter.
Our Virtual-IPU software integrates with workload management and orchestration software to easily serve many different users for training and inference, and allows the available resources to be adapted and reconfigured from job to job.
Whether you are using a single IPU or thousands for your Machine Intelligence workload, Graphcore's Poplar SDK makes this simple. You can use your preferred AI framework, such as TensorFlow or PyTorch, and from this high-level description, Poplar will build the complete compute graph, capturing the computation, the data and the communication. It then compiles this compute graph and builds the runtime programs that manage the compute, the memory management and the networking communication, to take full advantage of the available IPU hardware.
If you’re looking to add Machine Intelligence compute into your datacenter, there’s nothing more powerful, more flexible or easier to use than a Graphcore IPU-Machine M2000.
Innovation and advantage
Graphcore customers span automotive, consumer internet, finance, healthcare, research and more.
The number of corporations, organisations and research institutions using Graphcore systems is growing rapidly and includes Microsoft, Oxford Nanopore, EspresoMedia, the University of Oxford, Citadel and Qwant.
Graphcore’s technology is also being evaluated by J.P. Morgan to see if its solutions can accelerate the bank’s advances in AI, specifically in Natural Language Processing and speech recognition.
With the launch of the IPU-Machine M2000 and IPU-POD64, the competitive advantage that we are able to offer is extended even further.
Graphcore's latest product line is made possible by a range of ambitious technological innovations across compute, data and communication that deliver the industry-leading performance customers expect.
Compute
At the heart of every IPU-Machine M2000 is our new Graphcore Colossus™ Mk2 GC200 IPU. Developed using TSMC’s latest 7nm process technology, each chip packs 59.4 billion transistors onto a single 823mm² die, making it the most complex processor ever made.
