NVIDIA's Latest A100 AI Chip Comes with 1.6 TB/s of Bandwidth


NVIDIA Launches A100 AI Chip with 1.6 TB/s Bandwidth, 54 Billion Transistors, and More

Nvidia Corporation, commonly referred to as Nvidia, is an American technology company incorporated in Delaware and based in Santa Clara, California. It designs graphics processing units (GPUs) for the gaming and professional markets, as well as system-on-a-chip units (SoCs) for the mobile computing and automotive markets. Its primary GPU product line, labeled “GeForce”, competes directly with Advanced Micro Devices’ (AMD) “Radeon” products. Nvidia has also expanded its presence in the gaming industry with its handheld Shield Portable, Shield Tablet, and Shield Android TV.

On 14 May 2020, NVIDIA launched its A100 AI chip, which had been scheduled for March 2020 but was delayed by the coronavirus pandemic. NVIDIA CEO Jensen Huang called it the ultimate instrument for advancing AI. Huang said it can make supercomputing tasks, which are vital in the fight against COVID-19, far more cost-efficient and powerful than today's more expensive systems.

The Nvidia A100 chip packs 54 billion transistors (the on-off switches that are the building blocks of all things electronic) and can deliver 5 petaflops of performance. NVIDIA says the chip is 20x more powerful than its previous-generation Volta chip.

You get all of the overhead of additional memory, CPUs, and power supplies of 56 servers … collapsed into one. The economic value proposition is really off the charts, and that’s the thing that is really exciting.

said Jensen Huang, founder and CEO of NVIDIA.

The chip uses the Ampere architecture, which could also power consumer products such as Nvidia's GeForce graphics chips. In contrast to Advanced Micro Devices (AMD), Nvidia focuses on a single microarchitecture for its GPUs, serving both commercial AI and consumer graphics.

The powerful trends of cloud computing and AI are driving a tectonic shift in data center designs, so that what was once a sea of CPU-only servers is now GPU-accelerated computing. The NVIDIA A100 GPU is a 20x AI performance leap and an end-to-end machine learning accelerator, from data analytics to training to inference. For the first time, scale-up and scale-out workloads can be accelerated on one platform. NVIDIA A100 will simultaneously boost throughput and drive down the cost of data centers.

Huang added.
