General-purpose instances featuring full passthrough or an affordable fraction of an NVIDIA A100 GPU.
The NVIDIA A100 Tensor Core GPU delivers acceleration at every scale, powering the world's most advanced elastic data centers for AI, data analytics, and HPC. Built on the NVIDIA Ampere architecture, the A100 is the engine of the NVIDIA data center platform. It offers up to 20X higher performance than the previous generation and can be partitioned into as many as seven GPU instances to adapt dynamically to shifting demand. The A100 80GB variant delivers the world's fastest memory bandwidth, at over 2 terabytes per second (TB/s), to handle the largest models and datasets.
Name | NVMe Storage | Memory | vCPUs | Transfer | Price | Regions |
---|---|---|---|---|---|---|
G1-gpu-a100-6m-1c-4g | 70 GiB | 6 GB | 1 vCPU | 1 TB | $0.1507/hr ($110/mo) | 3 |
G1-gpu-a100-12m-1c-8g | 140 GiB | 12 GB | 1 vCPU | 1 TB | $0.3014/hr ($220/mo) | 4 |
G1-gpu-a100-15m-2c-10g | 170 GiB | 15 GB | 2 vCPUs | 2 TB | $0.4192/hr ($306/mo) | 2 |
G1-gpu-a100-30m-3c-20g | 350 GiB | 30 GB | 3 vCPUs | 3 TB | $0.7534/hr ($550/mo) | 4 |
G1-gpu-a100-60m-6c-40g | 700 GiB | 60 GB | 6 vCPUs | 6 TB | $1.3014/hr ($950/mo) | 2 |
G1-gpu-a100-120m-12c-80g | 1400 GiB | 120 GB | 12 vCPUs | 10 TB | $2.6027/hr ($1,900/mo) | 2 |
G1-gpu-a100-240m-24c-160g | 1400 GiB | 240 GB | 24 vCPUs | 10 TB | $5.2055/hr ($3,800/mo) | 1 |
G1-gpu-a100-480m-48c-320g | 1400 GiB | 480 GB | 48 vCPUs | 15 TB | $10.4110/hr ($7,600/mo) | 1 |
G1-gpu-a100-960m-96c-640g | 2200 GiB | 960 GB | 96 vCPUs | 25 TB | $20.8219/hr ($15,200/mo) | Sold out |
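The monthly price appears to act as a cap on the metered hourly rate. A minimal sketch of that relationship, assuming a roughly 730-hour billing month (an assumption for illustration, not a figure stated on this page):

```python
# Check that each plan's monthly cap is roughly 730x its hourly rate.
# The 730-hour month is an assumption, not a published billing rule.
HOURS_PER_MONTH = 730

# (hourly $, monthly $) for a few plans from the table above.
plans = {
    "G1-gpu-a100-6m-1c-4g": (0.1507, 110),
    "G1-gpu-a100-12m-1c-8g": (0.3014, 220),
    "G1-gpu-a100-120m-12c-80g": (2.6027, 1900),
}

for name, (hourly, monthly) in plans.items():
    estimate = hourly * HOURS_PER_MONTH
    # Allow a small rounding margin between the two published figures.
    assert abs(estimate - monthly) / monthly < 0.01, name
    print(f"{name}: {estimate:.2f} ≈ {monthly}")
```

If a droplet runs for less than a full month, billing at the hourly rate never exceeds the monthly figure under this assumption.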
Every NVIDIA A100 unit provides 80 GB of HBM2e ECC memory, and a physical server can accommodate up to 8 GPUs. The g value in the plan name indicates the amount of GPU memory, in GB, available to each respective plan.
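The plan names encode all three resource figures, so they can be decoded programmatically. A hypothetical helper, assuming the `<memory>m-<vcpus>c-<gpu-memory>g` suffix pattern visible in the table holds for every plan:

```python
import re

def parse_plan(name: str) -> dict:
    """Decode a G1 A100 plan name into its resource figures.

    Hypothetical helper based on the pattern seen in the table:
    <memory GB>m-<vCPUs>c-<GPU memory GB>g.
    """
    m = re.search(r"(\d+)m-(\d+)c-(\d+)g$", name)
    if m is None:
        raise ValueError(f"unrecognized plan name: {name}")
    memory_gb, vcpus, gpu_memory_gb = map(int, m.groups())
    return {"memory_gb": memory_gb, "vcpus": vcpus, "gpu_memory_gb": gpu_memory_gb}

print(parse_plan("G1-gpu-a100-6m-1c-4g"))
# → {'memory_gb': 6, 'vcpus': 1, 'gpu_memory_gb': 4}
```

For example, `G1-gpu-a100-240m-24c-160g` decodes to 240 GB of memory, 24 vCPUs, and a 160 GB share of GPU memory (two full A100 80GB units).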