The Seven Bridges Knowledge Center

The Seven Bridges Platform is a simple solution for doing bioinformatics at industrial scale. But sometimes, everyone needs a little help.


List of available Amazon Web Services US East instances

If you use the Seven Bridges Platform on AWS cloud infrastructure, choose an instance type from the lists below and specify it as the value of the sbg:AWSInstanceType hint. See the AWS page on instance types for pricing details.
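For example, the hint is typically set in the hints section of a CWL tool or workflow description. A minimal sketch (the instance name is just one choice from the tables below):

```yaml
hints:
  - class: sbg:AWSInstanceType
    value: c4.2xlarge
```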

List of instances with ephemeral storage

Name Cores RAM [GB] Storage [GB]
c3.large 2 3.75 32
c3.xlarge 4 7.5 80
c3.2xlarge 8 15 160
c3.4xlarge 16 30 320
c3.8xlarge 32 60 640
i2.xlarge 4 30.5 800
i2.2xlarge 8 61 1600
i2.4xlarge 16 122 3200
i2.8xlarge 32 244 6400
m3.medium 1 3.75 4
m3.large 2 7.5 32
m3.xlarge 4 15 80
m3.2xlarge 8 30 160
r3.large 2 15 32
r3.xlarge 4 30.5 80
r3.2xlarge 8 61 160
r3.4xlarge 16 122 320
r3.8xlarge 32 244 640
x1.16xlarge 64 976 1920
x1.32xlarge 128 1952 1920

List of instances with variable attached EBS storage

Name Cores RAM [GB]
c4.large 2 3.75
c4.xlarge 4 7.5
c4.2xlarge 8 15
c4.4xlarge 16 30
c4.8xlarge 36 60
c5.large 2 4
c5.xlarge 4 8
c5.2xlarge 8 16
c5.4xlarge 16 32
c5.9xlarge 36 72
c5.18xlarge 72 144
m4.large 2 8
m4.xlarge 4 16
m4.2xlarge 8 32
m4.4xlarge 16 64
m4.10xlarge 40 160
m4.16xlarge 64 256
m5.large 2 8
m5.xlarge 4 16
m5.2xlarge 8 32
m5.4xlarge 16 64
m5.12xlarge 48 192
m5.24xlarge 96 384
r4.large 2 15.25
r4.xlarge 4 30.5
r4.2xlarge 8 61
r4.4xlarge 16 122
r4.8xlarge 32 244
r4.16xlarge 64 488
r5.large 2 16
r5.xlarge 4 32
r5.2xlarge 8 64
r5.4xlarge 16 128
r5.12xlarge 48 384
r5.24xlarge 96 768

GPU Instances

The Seven Bridges Platform also supports GPU instances. The first GPU family we're introducing is Amazon EC2 P2. P2 instances are powerful, scalable instances that provide GPU-based parallel compute capabilities. Designed for general-purpose GPU compute applications using CUDA and OpenCL, these instances are ideally suited for machine learning, molecular modeling, genomics, rendering, and other workloads requiring massive parallel floating-point processing power.

Name GPUs vCPUs RAM [GiB]
p2.xlarge 1 4 61
p2.8xlarge 8 32 488
p2.16xlarge 16 64 732

Creating a Docker image containing tools that run on GPU instances is similar to creating one for tools designed for CPU instances. The only major difference is that GPU tools have an additional requirement to interact with the GPU, through either OpenCL or CUDA. NVIDIA drivers come preinstalled, optimized according to Amazon best practices for the specific instance family, and are accessible from the Docker container. We recommend using one of the Docker images provided by NVIDIA as the base image. For tools that require CUDA, the list of supported images is available at https://hub.docker.com/r/nvidia/cuda/; for tools based on OpenCL, at https://hub.docker.com/r/nvidia/opencl. The rest of the procedure for creating and uploading a Docker image is the same as for tools designed to run on CPU instances. If you have any problems with the setup, please contact our support.
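A minimal Dockerfile sketch of this pattern, for a CUDA-based tool. The base image tag and the tool binary shown here are illustrative assumptions, not requirements; pick a tag from the NVIDIA repositories above that matches your tool's CUDA or OpenCL version.

```dockerfile
# Illustrative base image; choose a tag from
# https://hub.docker.com/r/nvidia/cuda/ that matches your tool's
# CUDA requirements (for OpenCL tools, use nvidia/opencl instead).
FROM nvidia/cuda:11.8.0-runtime-ubuntu22.04

# Install dependencies exactly as you would for a CPU tool image.
RUN apt-get update && apt-get install -y --no-install-recommends \
        ca-certificates \
    && rm -rf /var/lib/apt/lists/*

# Copy in your GPU tool's binary (my-gpu-tool is a hypothetical name).
COPY my-gpu-tool /usr/local/bin/my-gpu-tool
```

From here, tag and push the image to a registry as you would for any CPU tool.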

When creating a Docker image containing GPU tools, keep in mind that older binaries are usually built for older GPU architectures and might not work on newer GPUs. In that case, the binaries can't be used as-is and must be rebuilt from source for the target architecture.

