Documentation

How it works

Elastic Rack lets you create a Docker engine, just like the one on your computer, but running in the cloud.

When you sign up, you can create an engine, or you might have been invited to one that already exists. Each engine has a unique randomly-generated name.

Use the engine name to set a DOCKER_HOST environment variable for your local Docker client. Your commands then run in the cloud:

DOCKER_HOST=ssh://engine-name@rack.ws docker run hello-world

A tiny bash script is also provided to turn that into ./rack run hello-world.
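The helper is just a thin wrapper; a minimal sketch of what such a script might look like (the engine name is a placeholder for your own):

#!/usr/bin/env bash
# Hypothetical ./rack helper: run the local Docker client against the remote engine.
export DOCKER_HOST=ssh://engine-name@rack.ws
exec docker "$@"

With that in place, ./rack ps or ./rack logs behave exactly like the corresponding docker commands, executed on the remote engine.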

Automatic HTTPS

By default, your engine is private, meaning no-one on the internet can reach your containers. But if you bind a container to port 80, it gets its own HTTPS endpoint at https://engine-name.rack.ws.

DOCKER_HOST=ssh://engine-name@rack.ws docker run -d -p 80:80 nginx
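Once the container is up, the endpoint should respond over HTTPS. For example, to check the response headers from your machine:

curl -I https://engine-name.rack.ws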

Custom Domains

Create a CNAME record for my.domain.com pointing to engine-name.rack.ws.

Then apply a rack.domain label to your container:

DOCKER_HOST=ssh://engine-name@rack.ws docker run -d -p 80:80 --label rack.domain=my.domain.com nginx

For apex domains, you can use an ALIAS record instead, but you must also create a TXT record named rack.apex with the value engine-name.rack.ws.
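To confirm the records have propagated, a quick check with dig (this assumes the TXT record is created under your domain, i.e. rack.apex.domain.com):

  dig +short CNAME my.domain.com
  dig +short TXT rack.apex.domain.com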

Additional HTTPS endpoints on one engine

Expose additional containers by listening on a public port and applying two labels (see the example after this list):

rack.domain — a custom domain as above, or your-choice.engine-name.rack.ws

rack.http — the container's host HTTP port
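For example, a second nginx container published on host port 8080 under a subdomain of the engine (the subdomain api is illustrative):

DOCKER_HOST=ssh://engine-name@rack.ws docker run -d -p 8080:80 --label rack.domain=api.engine-name.rack.ws --label rack.http=8080 nginx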

Non-HTTP endpoints

It is possible to expose public TCP and UDP endpoints — email support@elasticrack.com to request this.

Sharing Docker access

Simply log in and send an email invitation. Elastic Rack synchronises SSH keys automatically, and revoking access is instant.

You can create additional Docker engines, for example to share them with different groups of people, or bill them separately.

Docker Compose

Docker comes with Compose, which lets you work with multiple containers as a single project. It works with Elastic Rack without any special steps.

For example, to deploy your docker-compose.yml (with the helper script):

./rack compose up -d --build

Docker automatically isolates Compose projects, so a single engine can easily host many applications and environments. With Elastic Rack, it all works just as it does on your local Docker engine.

To deploy multiple applications, just use different Compose project names. By default, Docker uses the name of your working directory, or you can set the name explicitly, as in docker compose -p pied-piper.
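For example, deploying two applications to the same engine under different project names (the names are illustrative; run each from its own project directory):

  ./rack compose -p pied-piper up -d --build
  ./rack compose -p hooli up -d --build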

To apply environment-specific configuration, Docker's suggested approach of layering multiple Compose files is a great place to start. If you follow that convention, you can start with just three files:

docker-compose.yml — the base configuration shared by every environment

docker-compose.override.yml — local development overrides, which Docker applies automatically

docker-compose.production.yml — production-only overrides

Your local docker compose commands work as before, but you can also create environment-specific helper scripts, for example ./production:

#!/usr/bin/env bash
DOCKER_HOST=ssh://engine-name@rack.ws docker compose -p app-production -f docker-compose.yml -f docker-compose.production.yml "$@"

...then working with production is as simple as:

  ./production ps
  ./production up -d --build
  ./production logs -f

Check the script into source control, so everyone with access to the engine can deploy instantly.

A staging environment is just one more override file and a matching helper script.
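For example, a ./staging script following the same pattern (docker-compose.staging.yml and the project name are assumptions following the convention above):

#!/usr/bin/env bash
DOCKER_HOST=ssh://engine-name@rack.ws docker compose -p app-staging -f docker-compose.yml -f docker-compose.staging.yml "$@"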

You can also run CI in the same way, on the same engine, which skips the image push/pull round trip and removes the need for dedicated runners.
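A sketch of what a CI step might run, assuming the runner holds an SSH key authorised for the engine (docker-compose.ci.yml, the service name web, and the test command are hypothetical):

  # build on the engine itself, so there is no image push/pull
  DOCKER_HOST=ssh://engine-name@rack.ws docker compose -p app-ci -f docker-compose.yml -f docker-compose.ci.yml up -d --build
  # run the test suite in a one-off container
  DOCKER_HOST=ssh://engine-name@rack.ws docker compose -p app-ci -f docker-compose.yml -f docker-compose.ci.yml run --rm web pytest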

Support

Elastic Rack is currently in beta, so please email support@elasticrack.com for personal support.