List of available Amazon Web Services US instances

The lists below show the available AWS instances that you can specify as a value for the sbg:AWSInstanceType hint. All of the listed instances are available in both the US East (N. Virginia) and US West (Oregon) regions, either of which can be selected as a project location on the Platform.

See the AWS page on instance types for details on pricing.
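
For illustration, here is a minimal sketch of how the hint can be set in a tool's CWL description. The tool structure and the echo command are assumptions for the example, and the EBS-suffixed value format mentioned in the comment is an assumption as well, not a definitive reference:

```yaml
# Minimal CWL tool sketch (illustrative structure) showing where the
# sbg:AWSInstanceType hint goes.
cwlVersion: v1.2
class: CommandLineTool
baseCommand: echo
inputs: []
outputs: []
hints:
  - class: sbg:AWSInstanceType
    value: c5.9xlarge
    # For instances with attached EBS storage, the value may also carry
    # the disk type and size, e.g. "c5.9xlarge;ebs-gp2;1024" (assumed format).
$namespaces:
  sbg: https://sevenbridges.com
```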

List of instances with ephemeral storage

| Name | Cores | RAM [GB] | Storage | Network Performance (Gbps) |
| --- | --- | --- | --- | --- |
| i2.xlarge | 4 | 30.5 | 800 GB | Moderate |
| i2.2xlarge | 8 | 61 | 1,600 GB | Moderate |
| i2.4xlarge | 16 | 122 | 3,200 GB | High |
| i2.8xlarge | 32 | 244 | 6,400 GB | 10 Gigabit |
| i3.large | 2 | 15.25 | 1 x 0.475 TB NVMe SSD | Up to 10 |
| i3.xlarge | 4 | 30.5 | 1 x 0.95 TB NVMe SSD | Up to 10 |
| i3.2xlarge | 8 | 61 | 1 x 1.9 TB NVMe SSD | Up to 10 |
| i3.4xlarge | 16 | 122 | 2 x 1.9 TB NVMe SSD | Up to 10 |
| i3.8xlarge | 32 | 244 | 4 x 1.9 TB NVMe SSD | 10 |
| i3.16xlarge | 64 | 488 | 8 x 1.9 TB NVMe SSD | 25 |
| i3en.large | 2 | 16 | 1 x 1.25 TB NVMe SSD | Up to 25 |
| i3en.xlarge | 4 | 32 | 1 x 2.5 TB NVMe SSD | Up to 25 |
| i3en.2xlarge | 8 | 64 | 2 x 2.5 TB NVMe SSD | Up to 25 |
| i3en.3xlarge | 12 | 96 | 1 x 7.5 TB NVMe SSD | Up to 25 |
| i3en.6xlarge | 24 | 192 | 2 x 7.5 TB NVMe SSD | 25 |
| i3en.12xlarge | 48 | 384 | 4 x 7.5 TB NVMe SSD | 50 |
| i3en.24xlarge | 96 | 768 | 8 x 7.5 TB NVMe SSD | 100 |
| x1.16xlarge | 64 | 976 | 1,920 GB | 10 |
| x1.32xlarge | 128 | 1952 | 1,920 GB | 25 |

List of instances with variable attached EBS storage

| Name | Cores | RAM [GB] | Auto-scheduling* | EBS Bandwidth (Mbps) | Network Performance (Gbps) |
| --- | --- | --- | --- | --- | --- |
| c4.large | 2 | 3.75 | Yes | 500 | Moderate |
| c4.xlarge | 4 | 7.5 | Yes | 750 | High |
| c4.2xlarge | 8 | 15 | Yes | 1,000 | High |
| c4.4xlarge | 16 | 30 | Yes | 2,000 | High |
| c4.8xlarge | 36 | 60 | Yes | 4,000 | 10 Gigabit |
| c5.large | 2 | 4 | Yes | Up to 4,750 | Up to 10 |
| c5.xlarge | 4 | 8 | Yes | Up to 4,750 | Up to 10 |
| c5.2xlarge | 8 | 16 | Yes | Up to 4,750 | Up to 10 |
| c5.4xlarge | 16 | 32 | Yes | 4,750 | Up to 10 |
| c5.9xlarge | 36 | 72 | Yes | 9,500 | 10 |
| c5.12xlarge | 48 | 96 | Yes | 9,500 | 12 |
| c5.18xlarge | 72 | 144 | No | 19,000 | 25 |
| c5.24xlarge | 96 | 192 | No | 19,000 | 25 |
| m4.large | 2 | 8 | Yes | 450 | Moderate |
| m4.xlarge | 4 | 16 | Yes | 750 | High |
| m4.2xlarge | 8 | 32 | Yes | 1,000 | High |
| m4.4xlarge | 16 | 64 | Yes | 2,000 | High |
| m4.10xlarge | 40 | 160 | Yes | 4,000 | 10 Gigabit |
| m4.16xlarge | 64 | 256 | No | 10,000 | 25 Gigabit |
| m5.large | 2 | 8 | Yes | Up to 4,750 | Up to 10 |
| m5.xlarge | 4 | 16 | Yes | Up to 4,750 | Up to 10 |
| m5.2xlarge | 8 | 32 | Yes | Up to 4,750 | Up to 10 |
| m5.4xlarge | 16 | 64 | Yes | 4,750 | Up to 10 |
| m5.8xlarge | 32 | 128 | Yes | 6,800 | 10 |
| m5.12xlarge | 48 | 192 | Yes | 9,500 | 12 |
| m5.16xlarge | 64 | 256 | No | 13,600 | 20 |
| m5.24xlarge | 96 | 384 | No | 19,000 | 25 |
| r4.large | 2 | 15.25 | Yes |  | Up to 10 |
| r4.xlarge | 4 | 30.5 | Yes |  | Up to 10 |
| r4.2xlarge | 8 | 61 | Yes |  | Up to 10 |
| r4.4xlarge | 16 | 122 | Yes |  | Up to 10 |
| r4.8xlarge | 32 | 244 | Yes |  | 10 |
| r4.16xlarge | 64 | 488 | No |  | 25 |
| r5.large | 2 | 16 | Yes | Up to 4,750 | Up to 10 |
| r5.xlarge | 4 | 32 | Yes | Up to 4,750 | Up to 10 |
| r5.2xlarge | 8 | 64 | Yes | Up to 4,750 | Up to 10 |
| r5.4xlarge | 16 | 128 | Yes | 4,750 | Up to 10 |
| r5.8xlarge | 32 | 256 | Yes | 6,800 | 10 |
| r5.12xlarge | 48 | 384 | Yes | 9,500 | 12 |
| r5.16xlarge | 64 | 512 | No | 13,600 | 20 |
| r5.24xlarge | 96 | 768 | No | 19,000 | 25 |

* Instances marked Yes in the Auto-scheduling column can be selected automatically for task execution, based on the CPU and memory requirements defined for the tool. To use instances that are not available for automatic scheduling, you must set the instance type explicitly using the sbg:AWSInstanceType hint.
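
In practice, automatic scheduling is driven by the resource requirements declared in the tool's CWL description. A minimal sketch using the standard CWL ResourceRequirement fields; the values shown are arbitrary examples:

```yaml
# With no explicit instance hint, an auto-schedulable instance that meets
# these minimums is selected (here, one with 16+ cores and 64+ GB of RAM).
requirements:
  - class: ResourceRequirement
    coresMin: 16       # minimum number of CPU cores
    ramMin: 65536      # minimum RAM in MiB (64 GB)
```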

GPU Instances

The Platform also supports the following powerful, scalable GPU instances that deliver high-performance computing in the cloud. Designed for general-purpose GPU compute applications using CUDA and OpenCL, these instances are ideally suited for machine learning, molecular modeling, genomics, rendering, and other workloads that require massive parallel floating-point processing power.

| Name | GPUs | vCPUs | RAM (GiB) | EBS Bandwidth | Network Performance |
| --- | --- | --- | --- | --- | --- |
| p2.xlarge | 1 | 4 | 61 |  | High |
| p2.8xlarge | 8 | 32 | 488 |  | 10 Gbps |
| p2.16xlarge | 16 | 64 | 732 |  | 20 Gbps |
| p3.2xlarge | 1 Tesla V100 | 8 | 61 | 1.5 Gbps | Up to 10 Gbps |
| p3.8xlarge | 4 Tesla V100 | 32 | 244 | 7 Gbps | 10 Gbps |
| p3.16xlarge | 8 Tesla V100 | 64 | 488 | 14 Gbps | 25 Gbps |
| g4dn.xlarge | 1 | 4 | 16 | Up to 3.5 Gbps | Up to 25 Gbps |
| g4dn.2xlarge | 1 | 8 | 32 | Up to 3.5 Gbps | Up to 25 Gbps |
| g4dn.4xlarge | 1 | 16 | 64 | 4.75 Gbps | Up to 25 Gbps |
| g4dn.8xlarge | 1 | 32 | 128 | 9.5 Gbps | 50 Gbps |
| g4dn.16xlarge | 1 | 64 | 256 | 9.5 Gbps | 50 Gbps |
| g4dn.12xlarge | 4 | 48 | 192 | 9.5 Gbps | 50 Gbps |

Creating a Docker image that contains tools to be run on GPU instances is similar to creating one for tools designed for CPU instances. The only major difference is that GPU tools have an additional requirement for interacting with GPUs, through either OpenCL or CUDA. NVIDIA drivers come preinstalled, optimized according to Amazon best practices for the specific instance family, and are accessible from the Docker container. It is recommended to use one of the Docker images provided by NVIDIA as the base image. For tools that require CUDA, the list of supported images is available at https://hub.docker.com/r/nvidia/cuda/, and for tools based on OpenCL at https://hub.docker.com/r/nvidia/opencl. The rest of the procedure for creating and uploading a Docker image is the same as for tools designed to run on CPU instances. If you have any problems with the setup, please contact our Support Team.
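
Once such an image is built and uploaded, a GPU tool references it through the standard CWL DockerRequirement. A minimal sketch; the registry path and tag below are hypothetical placeholders, not a recommendation for a specific image or version:

```yaml
# Hypothetical image reference for a tool built on an nvidia/cuda base image.
hints:
  - class: DockerRequirement
    dockerPull: my-registry.example.com/my-project/cuda-tool:1.0
```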

When creating a Docker image containing GPU tools, keep in mind that older binaries are usually built for older GPU architectures and might not work on newer GPUs. In that case, those binaries can't be used and new ones must be built from the source code.