Several recent interviews and articles featuring a legendary pioneer in high performance computing (HPC), Professor Satoshi Matsuoka of the Global Scientific Information and Computing Center at Tokyo Institute of Technology, have made the rounds in our office. They cover his plans for the next-generation “Tsubame 3” supercomputer and some of the technological challenges and limitations that he and his team have had to overcome during its development.
By way of background, Professor Matsuoka is the architect of the previous versions of the Tsubame supercomputer and one of the leading minds active in supercomputer architecture design today. For further reading, an excellent two-part article by Ken Strandberg is available on the Next Platform website.
According to various reports, this forthcoming iteration of Tsubame, version 3.0, is being touted as “Japan’s fastest AI supercomputer” and is slated to come online sometime in the second half of 2017. It will work in conjunction with the existing Tsubame 2.5 supercomputer, and is estimated to achieve a computational speed of 12.2 double-precision petaflops and 64.3 half-precision petaflops. This would easily preserve its status as the fastest supercomputer in Japan and one of the fastest supercomputers in the world.
Reports also indicate that the design and operation of this system is supported by the Government of Japan, with SGI Japan as the primary contractor on the project. Hardware partners for the system are rumored to be NVIDIA for the GPUs and Intel Corporation, which is likely providing its forthcoming “Skylake” Xeon® processors for this application. Images of the nodes that surfaced alongside these articles suggest that each node will feature two CPUs and four Tesla P100 GPUs. The entire system is projected to comprise 540 blades with a total of 2,160 GPUs powering its phenomenal processing capability.
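As a quick sanity check on the reported figures, the blade and GPU counts above line up with simple arithmetic. The short sketch below is illustrative only, using only the numbers cited in this article:

```python
# Back-of-the-envelope check of the reported TSUBAME 3.0 configuration.
# All figures below are taken from the reports cited in this article.
blades = 540
gpus_per_blade = 4           # four Tesla P100 GPUs per node, per published node images
total_gpus = blades * gpus_per_blade
print(total_gpus)            # 2160, matching the reported GPU count

fp64_pflops = 12.2           # reported double-precision peak
fp16_pflops = 64.3           # reported half-precision peak
# Roughly a 5.3x throughput gain from dropping to half precision --
# a major reason these systems are pitched at AI workloads.
print(round(fp16_pflops / fp64_pflops, 1))
```

That roughly 5x gap between double- and half-precision throughput is a big part of why the "AI supercomputer" label fits.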
A New Frontier: Integrated AI-HPC Hybrid Workflow
One of the new HPC frontiers that Tsubame 3 will explore is a relatively new idea that is described as an “integrated AI-HPC” workflow or “AI-HPC Hybrid” workflow. In this model, machine learning workloads will not simply run side by side with simulations, but will be tasked with further accelerating the simulation.
To further elaborate on the concept of integrated AI-HPC workflows, NextPlatform.com describes it in this manner:
“Suffice it to say, the idea is to integrate machine learning into the simulation, to do some of the computationally intensive stuff in a new way. So, as part of a climate model, you teach the system using machine learning to predict the weather by watching movies of the weather, or in astronomy, you use machine learning to remove the noise from the signal to find the interesting bits of a star field.”
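To make the surrogate idea above concrete, here is a minimal toy sketch. It is not Tsubame code, and the function names are hypothetical; it simply shows the pattern of sampling an expensive simulation kernel offline, then substituting a cheap learned approximation (here, plain interpolation standing in for a trained model) inside the main loop:

```python
import bisect
import math

# Toy stand-in for a computationally expensive kernel inside a simulation,
# e.g. a radiation or chemistry step. (Illustrative only; not Tsubame code.)
def expensive_step(x):
    return math.sin(x) * math.exp(-0.1 * x)

# "Training" phase: sample the expensive kernel offline on a coarse grid.
grid = [i * 0.1 for i in range(101)]          # 0.0 .. 10.0
samples = [expensive_step(x) for x in grid]

# Cheap learned surrogate: linear interpolation over the sampled data.
# In a real AI-HPC workflow this would be a trained neural network.
def surrogate_step(x):
    i = bisect.bisect_right(grid, x) - 1
    i = max(0, min(i, len(grid) - 2))
    t = (x - grid[i]) / (grid[i + 1] - grid[i])
    return samples[i] + t * (samples[i + 1] - samples[i])

# The simulation loop then calls the surrogate instead of the expensive
# kernel, falling back to the real kernel only where accuracy demands it.
err = max(abs(surrogate_step(x) - expensive_step(x)) for x in [0.05, 3.33, 7.77])
print(f"max surrogate error on spot checks: {err:.4f}")
```

The payoff is that the surrogate is orders of magnitude cheaper per call than the kernel it replaces, which is exactly the acceleration the hybrid workflow is after.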
From the perspective of a company deeply involved in HPC, it is exciting to read about the frontiers this HPC pioneer is pushing toward, the barriers being broken, and the known limits of HPC both now and in the next few years. At AMI, our work in both UEFI BIOS firmware with Aptio® V and in BMC and remote management with our MegaRAC® family of products is closely related and instrumental to a number of HPC applications, so these topics are of great interest to us.
We hope that you enjoyed this quick overview of the development of Tsubame 3. With these announcements, it appears to have cemented its position as a longstanding member of the TOP500 list of supercomputers for some time to come, and could well jump up the ranks from its current spot at number 40 on that prestigious list.
For additional background reading, consider the following articles as well:
- DataCenterDynamics.com: https://www.datacenterdynamics.com/content-tracks/servers-storage/tokyo-techs-tsubame-3-will-be-ai/hpc-hybrid/97843.fullarticle
- HPCwire.com: https://www.hpcwire.com/2017/02/16/tokyo-techs-tsubame-3-0-supercomputer/
What are your thoughts on where trends in HPC such as AI-HPC hybrid workloads and heterogeneous supercomputing are headed? We would be very happy to hear from you in the comments section below. You can also drop us a line via social media or our Contact Us form to get in touch. As always, thanks for reading!