
NVIDIA A30 24GB HBM2 Memory Ampere GPU Tesla Data Center Accelerator

The NVIDIA A30 is a powerful data center accelerator built on the Ampere architecture, featuring 24GB of high-bandwidth HBM2 memory for demanding AI and high-performance computing workloads. This Tesla-class GPU delivers exceptional performance for machine learning training and inference, scientific computing, and data analytics while maintaining energy efficiency. Key advantages include massive parallel processing capabilities, optimized tensor operations for AI frameworks, multi-instance GPU technology for workload isolation, and enterprise-grade reliability. The A30’s large memory capacity enables handling of complex models and datasets, while its advanced cooling design ensures consistent performance in data center environments. This accelerator provides organizations with the computational power needed to accelerate breakthrough discoveries, reduce time-to-insight, and scale AI applications efficiently across cloud and on-premises infrastructure.

Minimum order quantity (MOQ): 1 piece

Premium Client Discounts Available

Brand: NVIDIA
Product Status: New
Application: Workstation
ROPs: 96
Interface: PCIe 4.0


Bring accelerated performance to every enterprise workload with NVIDIA A30 Tensor Core GPUs. With NVIDIA Ampere architecture Tensor Cores and Multi-Instance GPU (MIG), it delivers speedups securely across diverse workloads, including AI inference at scale and high-performance computing (HPC) applications. By combining fast memory bandwidth and low power consumption in a PCIe form factor—optimal for mainstream servers—A30 enables an elastic data center and delivers maximum value for enterprises.

Enterprise-Ready Utilization

A30 with MIG maximizes the utilization of GPU-accelerated infrastructure. With MIG, an A30 GPU can be partitioned into as many as four independent instances, giving multiple users access to GPU acceleration. MIG works with Kubernetes, containers, and hypervisor-based server virtualization. MIG lets infrastructure managers offer a right-sized GPU with guaranteed QoS for every job, extending the reach of accelerated computing resources to every user.

FP64: 5.2 teraFLOPS
FP64 Tensor Core: 10.3 teraFLOPS
FP32: 10.3 teraFLOPS
TF32 Tensor Core: 82 teraFLOPS | 165 teraFLOPS*
BFLOAT16 Tensor Core: 165 teraFLOPS | 330 teraFLOPS*
FP16 Tensor Core: 165 teraFLOPS | 330 teraFLOPS*
INT8 Tensor Core: 330 TOPS | 661 TOPS*
INT4 Tensor Core: 661 TOPS | 1321 TOPS*
Media engines: 1 optical flow accelerator (OFA), 1 JPEG decoder (NVJPEG), 4 video decoders (NVDEC)
GPU memory: 24GB HBM2
GPU memory bandwidth: 933GB/s
Interconnect: PCIe Gen4 (64GB/s); third-generation NVLink (200GB/s**)
Form factor: dual-slot, full-height, full-length (FHFL)
Max thermal design power (TDP): 165W
Multi-Instance GPU (MIG): 4 GPU instances @ 6GB each; 2 GPU instances @ 12GB each; 1 GPU instance @ 24GB
Virtual GPU (vGPU) software support: NVIDIA AI Enterprise for VMware; NVIDIA Virtual Compute Server

* With structured sparsity
** Via NVLink bridge connecting two GPUs
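The throughput and bandwidth figures above determine whether a given kernel on the A30 is compute-bound or memory-bound. As a rough sketch (a simple roofline estimate using only the numbers from this spec table, not an official NVIDIA tool), the break-even arithmetic intensity is peak FLOPS divided by memory bandwidth; the same numbers also show how the MIG profiles evenly partition the 24GB:

```python
# Figures taken from the A30 spec table above.
FP32_TFLOPS = 10.3      # dense FP32 throughput, teraFLOPS
TF32_TC_TFLOPS = 82.0   # TF32 Tensor Core, dense (165 with sparsity)
MEM_BW_GBS = 933.0      # HBM2 memory bandwidth, GB/s

def breakeven_intensity(tflops: float, bw_gbs: float) -> float:
    """FLOPs per byte a kernel must exceed before it stops being
    memory-bound on this GPU (simple roofline model)."""
    return (tflops * 1e12) / (bw_gbs * 1e9)

print(f"FP32 break-even: {breakeven_intensity(FP32_TFLOPS, MEM_BW_GBS):.1f} FLOP/byte")
print(f"TF32 Tensor Core break-even: {breakeven_intensity(TF32_TC_TFLOPS, MEM_BW_GBS):.1f} FLOP/byte")

# The MIG profiles listed above split the 24GB of HBM2 evenly:
TOTAL_MEM_GB = 24
for instances in (4, 2, 1):
    print(f"{instances} instance(s) @ {TOTAL_MEM_GB // instances}GB each")
```

A kernel streaming data at low arithmetic intensity (e.g. element-wise ops, around 1 FLOP/byte) is far below the roughly 11 FLOP/byte FP32 break-even point and will be limited by the 933GB/s memory bandwidth rather than compute.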
Brand: NVIDIA
Product Status: New
Application: Workstation
ROPs: 96
Interface: PCIe 4.0
Memory Size: 24GB
Bus Width: 3072-bit
Cores: 3584
Memory Type: HBM2
Product Name: NVIDIA TESLA A30 24G
GPU Series: NVIDIA Tesla GPU Series
P/N: A30 24G
Packaging Details: Work Packages
