List of available Amazon Web Services EU instances

The tables below list the available AWS instances that you can specify as a value for the sbg:AWSInstanceType hint. See the AWS page on instance types for pricing details.
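
For example, a tool's CWL description could pin execution to one of the instances below by setting the hint. This is a minimal sketch: the instance name is illustrative, and the exact serialization (YAML or JSON) depends on how you edit the app.

    hints:
      - class: sbg:AWSInstanceType
        value: c5.4xlarge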

List of instances with ephemeral storage

Name          | Cores | RAM [GB] | Storage [GB]
--------------|-------|----------|-------------------
i2.xlarge     | 4     | 30.5     | 800
i2.2xlarge    | 8     | 61       | 1600
i2.4xlarge    | 16    | 122      | 3200
i2.8xlarge    | 32    | 244      | 6400
i3.large      | 2     | 15.25    | 1 x 475 NVMe SSD
i3.xlarge     | 4     | 30.5     | 1 x 950 NVMe SSD
i3.2xlarge    | 8     | 61       | 1 x 1900 NVMe SSD
i3.4xlarge    | 16    | 122      | 2 x 1900 NVMe SSD
i3.8xlarge    | 32    | 244      | 4 x 1900 NVMe SSD
i3.16xlarge   | 64    | 488      | 8 x 1900 NVMe SSD
i3en.large    | 2     | 16       | 1 x 1250 NVMe SSD
i3en.xlarge   | 4     | 32       | 1 x 2500 NVMe SSD
i3en.2xlarge  | 8     | 64       | 2 x 2500 NVMe SSD
i3en.3xlarge  | 12    | 96       | 1 x 7500 NVMe SSD
i3en.6xlarge  | 24    | 192      | 2 x 7500 NVMe SSD
i3en.12xlarge | 48    | 384      | 4 x 7500 NVMe SSD
i3en.24xlarge | 96    | 768      | 8 x 7500 NVMe SSD
x1.16xlarge   | 64    | 976      | 1920
x1.32xlarge   | 128   | 1952     | 1920

List of instances with variable attached EBS storage

Name        | Cores | RAM [GB] | Auto-scheduling*
------------|-------|----------|-----------------
c4.large    | 2     | 3.75     | Yes
c4.xlarge   | 4     | 7.5      | Yes
c4.2xlarge  | 8     | 15       | Yes
c4.4xlarge  | 16    | 30       | Yes
c4.8xlarge  | 36    | 60       | Yes
c5.large    | 2     | 4        | Yes
c5.xlarge   | 4     | 8        | Yes
c5.2xlarge  | 8     | 16       | Yes
c5.4xlarge  | 16    | 32       | Yes
c5.9xlarge  | 36    | 72       | Yes
c5.12xlarge | 48    | 96       | Yes
c5.18xlarge | 72    | 144      | No
c5.24xlarge | 96    | 192      | No
m4.large    | 2     | 8        | Yes
m4.xlarge   | 4     | 16       | Yes
m4.2xlarge  | 8     | 32       | Yes
m4.4xlarge  | 16    | 64       | Yes
m4.10xlarge | 40    | 160      | Yes
m4.16xlarge | 64    | 256      | No
m5.large    | 2     | 8        | Yes
m5.xlarge   | 4     | 16       | Yes
m5.2xlarge  | 8     | 32       | Yes
m5.4xlarge  | 16    | 64       | Yes
m5.8xlarge  | 32    | 128      | Yes
m5.12xlarge | 48    | 192      | Yes
m5.16xlarge | 64    | 256      | No
m5.24xlarge | 96    | 384      | No
r4.large    | 2     | 15.25    | Yes
r4.xlarge   | 4     | 30.5     | Yes
r4.2xlarge  | 8     | 61       | Yes
r4.4xlarge  | 16    | 122      | Yes
r4.8xlarge  | 32    | 244      | Yes
r4.16xlarge | 64    | 488      | No
r5.large    | 2     | 16       | Yes
r5.xlarge   | 4     | 32       | Yes
r5.2xlarge  | 8     | 64       | Yes
r5.4xlarge  | 16    | 128      | Yes
r5.8xlarge  | 32    | 256      | Yes
r5.12xlarge | 48    | 384      | Yes
r5.16xlarge | 64    | 512      | No
r5.24xlarge | 96    | 768      | No

* Instances labelled Yes in the Auto-scheduling column can be selected for task execution automatically, based on the defined CPU and memory requirements, as shown in the sketch below. To use instances that are not available for automatic scheduling, set the instance type explicitly using the sbg:AWSInstanceType hint.
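
Automatic scheduling is driven by the resource requirements declared in the tool itself. A minimal CWL sketch (the values are illustrative; ramMin is expressed in MiB):

    requirements:
      - class: ResourceRequirement
        coresMin: 8
        ramMin: 32000

With these values, an auto-schedulable instance offering at least 8 cores and roughly 32 GB of RAM, such as m5.2xlarge from the table above, could be selected automatically.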

GPU Instances

The Platform also supports the following powerful, scalable GPU instances, which deliver high-performance computing in the cloud. Designed for general-purpose GPU computing with CUDA and OpenCL, these instances are well suited for machine learning, molecular modeling, genomics, rendering, and other workloads that require massive parallel floating-point processing power.

Name          | GPUs         | vCPUs | RAM [GiB]
--------------|--------------|-------|----------
p2.xlarge     | 1            | 4     | 61
p2.8xlarge    | 8            | 32    | 488
p2.16xlarge   | 16           | 64    | 732
p3.2xlarge    | 1 Tesla V100 | 8     | 61
p3.8xlarge    | 4 Tesla V100 | 32    | 244
p3.16xlarge   | 8 Tesla V100 | 64    | 488
g4dn.xlarge   | 1            | 4     | 16
g4dn.2xlarge  | 1            | 8     | 32
g4dn.4xlarge  | 1            | 16    | 64
g4dn.8xlarge  | 1            | 32    | 128
g4dn.16xlarge | 1            | 64    | 256
g4dn.12xlarge | 4            | 48    | 192

Creating a Docker image containing a tool that runs on GPU instances is similar to creating one for a tool designed for CPU instances. The only major difference is that GPU tools have an additional requirement for interacting with GPUs, through either OpenCL or CUDA. NVIDIA drivers come preinstalled and optimized according to Amazon best practices for the specific instance family, and are accessible from the Docker container. It is recommended to use one of the Docker images provided by NVIDIA as the base image. For tools that require CUDA, the list of supported images is available at https://hub.docker.com/r/nvidia/cuda/; for tools based on OpenCL, see https://hub.docker.com/r/nvidia/opencl. The rest of the procedure for creating and uploading a Docker image is the same as for tools designed to run on CPU instances. If you have any problems with the setup, please contact our Support Team.
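
For instance, a CUDA-based tool image might start from an NVIDIA-provided base image. This is a minimal sketch; the base image tag, package list, and tool name are illustrative assumptions, so pick a tag from the nvidia/cuda repository that matches your tool's CUDA version.

    # Base image with the CUDA runtime preinstalled; GPU drivers are provided by the host.
    FROM nvidia/cuda:11.4.3-runtime-ubuntu20.04

    # Install the tool's runtime dependencies as usual.
    RUN apt-get update && apt-get install -y --no-install-recommends python3 \
        && rm -rf /var/lib/apt/lists/*

    # Copy the prebuilt GPU-enabled tool into the image (illustrative name).
    COPY my-gpu-tool /usr/local/bin/my-gpu-tool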

When creating a Docker image containing GPU tools, keep in mind that older binaries are usually built for older GPU architectures and might not work on newer GPUs. In that case, the old binaries can't be used and new ones should be built from the source code.
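
When rebuilding CUDA sources, one common approach is to generate code for each target architecture explicitly. A hypothetical nvcc invocation (tool and file names are illustrative) covering both the Tesla V100 GPUs on p3 instances (sm_70) and the T4 GPUs on g4dn instances (sm_75) might look like:

    # Build a fat binary targeting both V100 (sm_70) and T4 (sm_75) GPUs.
    nvcc -O2 \
      -gencode arch=compute_70,code=sm_70 \
      -gencode arch=compute_75,code=sm_75 \
      -o my-gpu-tool my_gpu_tool.cu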