General-purpose instances featuring full passthrough of an NVIDIA A100 GPU or an affordable fraction of one.


The NVIDIA A100 Tensor Core GPU delivers acceleration at every scale, powering the world's most advanced elastic data centers for AI, data analytics, and HPC. Built on the NVIDIA Ampere architecture, the A100 is the engine of the NVIDIA data center platform. It offers up to 20X the performance of the previous generation and can be partitioned into as many as seven GPU instances, adapting dynamically to shifting demand. The A100 80GB variant delivers the world's fastest memory bandwidth at over 2 terabytes per second (TB/s), handling the largest models and datasets.

G1 GPU NVIDIA A100

Name | NVMe Storage | Memory | vCPUs | Transfer | Price | Regions
G1-gpu-a100-6m-1c-4g | 70 GiB | 6 GB | 1 | 1 TB | $110/mo | 3
G1-gpu-a100-12m-1c-8g | 140 GiB | 12 GB | 1 | 1 TB | $220/mo | 4
G1-gpu-a100-15m-2c-10g | 170 GiB | 15 GB | 2 | 2 TB | $306/mo | 4
G1-gpu-a100-30m-3c-20g | 350 GiB | 30 GB | 3 | 3 TB | $550/mo | 4
G1-gpu-a100-60m-6c-40g | 700 GiB | 60 GB | 6 | 6 TB | $950/mo | 5
G1-gpu-a100-120m-12c-80g | 1400 GiB | 120 GB | 12 | 10 TB | $1,900/mo | 4
G1-gpu-a100-240m-24c-160g | 1400 GiB | 240 GB | 24 | 10 TB | $3,800/mo | 4
G1-gpu-a100-480m-48c-320g | 1400 GiB | 480 GB | 48 | 15 TB | $7,600/mo | 2
G1-gpu-a100-960m-96c-640g | 2200 GiB | 960 GB | 96 | 25 TB | $15,200/mo | Sold out
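As a rough illustration of how the plans scale, the sketch below computes the monthly cost per GB of GPU memory from a few rows of the table above (the per-GB figures are our own arithmetic, not published rates):

```python
# Plan name -> (price in $/mo, GPU memory in GB), taken from the plan table.
plans = {
    "G1-gpu-a100-6m-1c-4g": (110, 4),
    "G1-gpu-a100-60m-6c-40g": (950, 40),
    "G1-gpu-a100-480m-48c-320g": (7600, 320),
}

# Effective $/mo per GB of A100 memory for each plan.
cost_per_gb = {name: price / gpu_gb for name, (price, gpu_gb) in plans.items()}

for name, cost in cost_per_gb.items():
    print(f"{name}: ${cost:.2f} per GB of GPU memory per month")
```

Larger plans work out slightly cheaper per GB of GPU memory than the smallest fractional slices.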

Every NVIDIA A100 unit carries 80 GB of HBM2e memory with ECC, and a physical server can accommodate up to 8 GPUs. The g value in a plan name indicates how much GPU memory, in GB, that plan provides.
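The plan-name convention can be decoded mechanically. A minimal sketch (the regex and field names are our own, not part of any official API), assuming the pattern m = system memory in GB, c = vCPUs, g = GPU memory in GB, as the rows of the table suggest:

```python
import re

# Matches plan names of the form G1-gpu-a100-<RAM>m-<vCPUs>c-<GPU mem>g.
PLAN_RE = re.compile(r"^G1-gpu-a100-(\d+)m-(\d+)c-(\d+)g$")

def parse_plan(name: str) -> dict:
    """Extract system memory (GB), vCPU count, and GPU memory (GB) from a plan name."""
    match = PLAN_RE.match(name)
    if match is None:
        raise ValueError(f"not a G1 A100 plan name: {name}")
    memory_gb, vcpus, gpu_memory_gb = map(int, match.groups())
    return {"memory_gb": memory_gb, "vcpus": vcpus, "gpu_memory_gb": gpu_memory_gb}
```

For example, `parse_plan("G1-gpu-a100-30m-3c-20g")` returns `{"memory_gb": 30, "vcpus": 3, "gpu_memory_gb": 20}`, matching that plan's row in the table.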