List of available Amazon Web Services US instances
The list below shows the available AWS instances that you can choose and specify as a value for the sbg:AWSInstanceType hint. All instances listed below are available in both the US East (N. Virginia) and US West (Oregon) regions, either of which can be selected as the project location on the Platform.
See the AWS page on instance types for details on pricing.
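For example, in a CWL tool description the hint is set as a class/value pair. The snippet below is a minimal sketch; the instance type shown is purely illustrative, and any instance from the tables below can be substituted:

```yaml
hints:
  - class: sbg:AWSInstanceType
    # illustrative choice; substitute any instance type from the tables below
    value: c5.4xlarge
```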
List of instances with ephemeral storage
Name | Cores | RAM [GB] | Storage [GB] | Network Performance (Gbps) |
---|---|---|---|---|
i2.xlarge | 4 | 30.5 | 1 x 800 | Moderate |
i2.2xlarge | 8 | 61 | 2 x 800 | Moderate |
i2.4xlarge | 16 | 122 | 4 x 800 | High |
i2.8xlarge | 32 | 244 | 8 x 800 | 10 |
i3.large | 2 | 15.25 | 1 x 475 NVMe SSD | Up to 10 |
i3.xlarge | 4 | 30.5 | 1 x 950 NVMe SSD | Up to 10 |
i3.2xlarge | 8 | 61 | 1 x 1900 NVMe SSD | Up to 10 |
i3.4xlarge | 16 | 122 | 2 x 1900 NVMe SSD | Up to 10 |
i3.8xlarge | 32 | 244 | 4 x 1900 NVMe SSD | 10 |
i3.16xlarge | 64 | 488 | 8 x 1900 NVMe SSD | 25 |
i3en.large | 2 | 16 | 1 x 1250 NVMe SSD | Up to 25 |
i3en.xlarge | 4 | 32 | 1 x 2500 NVMe SSD | Up to 25 |
i3en.2xlarge | 8 | 64 | 2 x 2500 NVMe SSD | Up to 25 |
i3en.3xlarge | 12 | 96 | 1 x 7500 NVMe SSD | Up to 25 |
i3en.6xlarge | 24 | 192 | 2 x 7500 NVMe SSD | 25 |
i3en.12xlarge | 48 | 384 | 4 x 7500 NVMe SSD | 50 |
i3en.24xlarge | 96 | 768 | 8 x 7500 NVMe SSD | 100 |
x1.16xlarge | 64 | 976 | 1 x 1920 | 10 |
x1.32xlarge | 128 | 1952 | 2 x 1920 | 25 |
List of instances with variable attached EBS storage
- The default attached storage size is 1 TB, but it can be changed to any size between 2 GB and 4 TB.
- Learn more from our EBS Customization Documentation, and see the hint sketch after this list.
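As a sketch of how the attached EBS storage can be customized together with the instance type, the sbg:AWSInstanceType hint value can carry an EBS suffix. The format and values below are illustrative assumptions; consult the EBS Customization Documentation for the authoritative syntax:

```yaml
hints:
  - class: sbg:AWSInstanceType
    # assumed format: <instance-type>;<ebs-type>;<size-in-GB>; 2048 is a placeholder size
    value: c4.2xlarge;ebs-gp2;2048
```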
Name | Cores | RAM [GB] | Auto-scheduling* | EBS Bandwidth (Mbps) | Network Performance (Gbps) |
---|---|---|---|---|---|
c4.large | 2 | 3.75 | Yes | 500 | Moderate |
c4.xlarge | 4 | 7.5 | Yes | 750 | High |
c4.2xlarge | 8 | 15 | Yes | 1,000 | High |
c4.4xlarge | 16 | 30 | Yes | 2,000 | High |
c4.8xlarge | 36 | 60 | Yes | 4,000 | 10 |
c5.large | 2 | 4 | Yes | Up to 4,750 | Up to 10 |
c5.xlarge | 4 | 8 | Yes | Up to 4,750 | Up to 10 |
c5.2xlarge | 8 | 16 | Yes | Up to 4,750 | Up to 10 |
c5.4xlarge | 16 | 32 | Yes | 4,750 | Up to 10 |
c5.9xlarge | 36 | 72 | Yes | 9,500 | 10 |
c5.12xlarge | 48 | 96 | Yes | 9,500 | 12 |
c5.18xlarge | 72 | 144 | No | 19,000 | 25 |
c5.24xlarge | 96 | 192 | No | 19,000 | 25 |
m4.large | 2 | 8 | Yes | 450 | Moderate |
m4.xlarge | 4 | 16 | Yes | 750 | High |
m4.2xlarge | 8 | 32 | Yes | 1,000 | High |
m4.4xlarge | 16 | 64 | Yes | 2,000 | High |
m4.10xlarge | 40 | 160 | Yes | 4,000 | 10 |
m4.16xlarge | 64 | 256 | No | 10,000 | 25 |
m5.large | 2 | 8 | Yes | Up to 4,750 | Up to 10 |
m5.xlarge | 4 | 16 | Yes | Up to 4,750 | Up to 10 |
m5.2xlarge | 8 | 32 | Yes | Up to 4,750 | Up to 10 |
m5.4xlarge | 16 | 64 | Yes | 4,750 | Up to 10 |
m5.8xlarge | 32 | 128 | Yes | 6,800 | 10 |
m5.12xlarge | 48 | 192 | Yes | 9,500 | 12 |
m5.16xlarge | 64 | 256 | No | 13,600 | 20 |
m5.24xlarge | 96 | 384 | No | 19,000 | 25 |
r4.large | 2 | 15.25 | Yes | | Up to 10 |
r4.xlarge | 4 | 30.5 | Yes | | Up to 10 |
r4.2xlarge | 8 | 61 | Yes | | Up to 10 |
r4.4xlarge | 16 | 122 | Yes | | Up to 10 |
r4.8xlarge | 32 | 244 | Yes | | 10 |
r4.16xlarge | 64 | 488 | No | | 25 |
r5.large | 2 | 16 | Yes | Up to 4,750 | Up to 10 |
r5.xlarge | 4 | 32 | Yes | Up to 4,750 | Up to 10 |
r5.2xlarge | 8 | 64 | Yes | Up to 4,750 | Up to 10 |
r5.4xlarge | 16 | 128 | Yes | 4,750 | Up to 10 |
r5.8xlarge | 32 | 256 | Yes | 6,800 | 10 |
r5.12xlarge | 48 | 384 | Yes | 9,500 | 12 |
r5.16xlarge | 64 | 512 | No | 13,600 | 20 |
r5.24xlarge | 96 | 768 | No | 19,000 | 25 |
* Instances labeled Yes in the Auto-scheduling column can be selected for task execution automatically, based on the defined CPU and memory requirements. To use instances that are not available for automatic scheduling, set the instance type explicitly using the sbg:AWSInstanceType hint.
GPU Instances
The Platform also supports the following powerful, scalable GPU instances, which deliver high-performance computing in the cloud. Designed for general-purpose GPU computing with CUDA and OpenCL, these instances are well suited to machine learning, molecular modeling, genomics, rendering, and other workloads that require massive parallel floating-point processing power.
Name | GPUs | vCPUs | RAM (GiB) | EBS Bandwidth | Network Performance |
---|---|---|---|---|---|
p2.xlarge | 1 | 4 | 61 | | High |
p2.8xlarge | 8 | 32 | 488 | | 10 Gbps |
p2.16xlarge | 16 | 64 | 732 | | 20 Gbps |
p3.2xlarge | 1 Tesla V100 | 8 | 61 | 1.5 Gbps | Up to 10 Gbps |
p3.8xlarge | 4 Tesla V100 | 32 | 244 | 7 Gbps | 10 Gbps |
p3.16xlarge | 8 Tesla V100 | 64 | 488 | 14 Gbps | 25 Gbps |
g4dn.xlarge | 1 | 4 | 16 | Up to 3.5 Gbps | Up to 25 Gbps |
g4dn.2xlarge | 1 | 8 | 32 | Up to 3.5 Gbps | Up to 25 Gbps |
g4dn.4xlarge | 1 | 16 | 64 | 4.75 Gbps | Up to 25 Gbps |
g4dn.8xlarge | 1 | 32 | 128 | 9.5 Gbps | 50 Gbps |
g4dn.12xlarge | 4 | 48 | 192 | 9.5 Gbps | 50 Gbps |
g4dn.16xlarge | 1 | 64 | 256 | 9.5 Gbps | 50 Gbps |
Creating a Docker image containing tools that run on GPU instances is similar to creating one for tools designed for CPU instances. The main difference is that GPU tools have an additional requirement for interacting with GPUs, through either OpenCL or CUDA. NVIDIA drivers come preinstalled and optimized according to Amazon best practices for the specific instance family, and are accessible from the Docker container.
It is recommended to use one of the Docker images provided by NVIDIA as the base image. For tools that require CUDA, the list of supported images is available at https://hub.docker.com/r/nvidia/cuda/; for tools based on OpenCL, at https://hub.docker.com/r/nvidia/opencl. The rest of the procedure for creating and uploading a Docker image is the same as for tools designed to run on CPU instances. If you have any problems with the setup, please contact our Support Team.
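As a minimal sketch, a Dockerfile for a CUDA-based tool might start from one of the NVIDIA base images mentioned above. The tag and the tool being installed are illustrative assumptions, not prescriptions; pick a CUDA version that matches your tool's requirements:

```Dockerfile
# Illustrative base image; choose a tag from https://hub.docker.com/r/nvidia/cuda/
FROM nvidia/cuda:11.8.0-runtime-ubuntu22.04

# Hypothetical GPU tool used for this example; replace with your own tool
COPY mytool /usr/local/bin/mytool
RUN chmod +x /usr/local/bin/mytool

ENTRYPOINT ["/usr/local/bin/mytool"]
```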
When creating a Docker image containing GPU tools, keep in mind that older binaries are usually built for older GPU architectures and may not work on newer GPUs. In that case, the binaries cannot be used as-is and should be rebuilt from source, as sketched below.
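For example, a CUDA tool can be rebuilt for specific GPU architectures with nvcc's -gencode flags. This is a sketch assuming a hypothetical single-file tool; the target architectures shown (Volta sm_70 for p3, Turing sm_75 for g4dn) should be adjusted to the instance family you plan to use:

```
# Hypothetical source file mytool.cu; targets Volta (p3) and Turing (g4dn) GPUs
nvcc -O2 \
  -gencode arch=compute_70,code=sm_70 \
  -gencode arch=compute_75,code=sm_75 \
  -o mytool mytool.cu
```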