AMD has been winning high-performance computing contracts lately, but to really compete with Nvidia, it needs to develop an alternative to Nvidia's CUDA language.

AMD GPU presence

AMD's $5.4 billion purchase of ATI Technologies in 2006 seemed like a strange move at the time. The two companies were not only in different markets but also at opposite ends of the continent, with ATI in the Toronto area of Canada and AMD in Sunnyvale, California.

The takeover probably saved AMD from bankruptcy, because it was the graphics business that kept the company afloat while the Athlon/Opteron business was going nowhere. In many quarters, graphics chips generated more revenue than CPUs.

But those days are over. AMD is once again a very competitive CPU company, with quarterly sales approaching the $2 billion mark. And while the CPU business is on fire, the GPU business continues to perform well.

In the second quarter of 2019, AMD's GPU shipments increased 9.8% over the first quarter, while Nvidia's were flat and Intel's declined 1.4%. A sequential increase is a very good result, because the second quarter is typically weaker than the first.

Neither AMD nor Nvidia breaks out sales by market segment, nor do they say what percentage comes from enterprise/HPC/supercomputing. The challenge for AMD is therefore to translate its popularity in gaming into enterprise sales.

Competition in high-performance computing

Nvidia clearly dominates high-performance computing (HPC), which also includes artificial intelligence (AI). AMD has no answer to Nvidia's RTX 2070/2080 or the Tesla T4, but that hasn't stopped AMD from remaining competitive. Oak Ridge National Laboratory plans to build an exascale supercomputer called Frontier with AMD Epyc processors and Radeon GPUs in 2021.

AMD CEO Lisa Su spoke about this at the recent Hot Chips semiconductor conference, where she said that Frontier combines "a highly optimized CPU, a highly optimized GPU, a highly optimized coherent link between CPU and GPU, and working with Cray on node-to-node latency characteristics really allows us to build a leadership system."

AMD has also signed a deal with Google to supply GPUs for its cloud-based Stadia game streaming service, delivering 10.7 teraflops of performance, more than the Microsoft and Sony consoles combined. And AMD has signed a two-year contract with Chinese company Baidu to provide GPU-based computing.

The problem is not so much the hardware as the software. Nvidia has a language called CUDA, first developed by Ian Buck at Stanford; he is now the head of Nvidia's AI efforts. It allows developers to write applications that fully exploit the GPU using familiar C++ syntax. Nvidia then went to hundreds of universities and helped them set up courses to teach students CUDA.
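
To give a sense of that familiar syntax, here is a minimal sketch of a CUDA vector-add program. The kernel name, array sizes and launch configuration are illustrative choices, not something taken from AMD or Nvidia documentation; the point is simply that the GPU code reads like ordinary C++ with a few extensions.

#include <cuda_runtime.h>
#include <cstdio>

// Kernel: each GPU thread adds one pair of elements.
__global__ void vector_add(const float* a, const float* b, float* c, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) c[i] = a[i] + b[i];
}

int main() {
    const int n = 1 << 20;
    size_t bytes = n * sizeof(float);
    float *a, *b, *c;
    // Unified (managed) memory keeps the example short;
    // explicit cudaMalloc/cudaMemcpy is the other common pattern.
    cudaMallocManaged(&a, bytes);
    cudaMallocManaged(&b, bytes);
    cudaMallocManaged(&c, bytes);
    for (int i = 0; i < n; ++i) { a[i] = 1.0f; b[i] = 2.0f; }

    // Launch enough 256-thread blocks to cover all n elements.
    int threads = 256;
    int blocks = (n + threads - 1) / threads;
    vector_add<<<blocks, threads>>>(a, b, c, n);
    cudaDeviceSynchronize();

    printf("c[0] = %f\n", c[0]);  // expect 3.0
    cudaFree(a); cudaFree(b); cudaFree(c);
    return 0;
}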

The end result is that universities around the world have thousands of graduates who know CUDA, and AMD has no equivalent.

The bottom line is that it is much harder to write code for a Radeon than for a Tesla/Volta. AMD supports the open OpenCL standard and the open source HIP project, which converts CUDA code into portable C++.
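
As a rough sketch of what that portability looks like, here is the same hypothetical vector-add program expressed in HIP. The HIP runtime calls mirror their CUDA counterparts almost name-for-name, and AMD's hipify tools automate most of the renaming; again, the specific names and sizes are illustrative, not drawn from the article.

#include <hip/hip_runtime.h>
#include <cstdio>

// Same kernel as the CUDA sketch above; the device code is unchanged.
__global__ void vector_add(const float* a, const float* b, float* c, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) c[i] = a[i] + b[i];
}

int main() {
    const int n = 1 << 20;
    size_t bytes = n * sizeof(float);
    float *a, *b, *c;
    hipMallocManaged(&a, bytes);   // cudaMallocManaged -> hipMallocManaged
    hipMallocManaged(&b, bytes);
    hipMallocManaged(&c, bytes);
    for (int i = 0; i < n; ++i) { a[i] = 1.0f; b[i] = 2.0f; }

    int threads = 256;
    int blocks = (n + threads - 1) / threads;
    vector_add<<<blocks, threads>>>(a, b, c, n);  // same launch syntax under hipcc
    hipDeviceSynchronize();        // cudaDeviceSynchronize -> hipDeviceSynchronize

    printf("c[0] = %f\n", c[0]);
    hipFree(a); hipFree(b); hipFree(c);
    return 0;
}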

The OpenCL standard was originally developed by Apple and is now maintained by the Khronos Group. It has not advanced particularly well in recent years, which goes to show that standards do best when everyone involved has something to gain.

For AMD to gain a foothold in the data center and in HPC/AI against Nvidia, it needs a competitor to CUDA. Until two years ago that simply was not possible, because AMD was fighting for its survival. But now the time is ripe for the company to focus on software and give Nvidia the same kind of fight it is giving Intel.
