Clusters

The cluster machines run Proxmox to host LXC containers.

Currently we have two machines in place: cluster1 and cluster2.

Accessing via SSH

Your computer should have cluster1 and cluster2 in your SSH Config. To SSH into a cluster, just run the following:

ssh cluster1
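
If cluster1 and cluster2 are not yet in your SSH config, entries along the following lines should work - note the addresses and user shown here are placeholders, so substitute the real values:

# ~/.ssh/config
# placeholder addresses and user - replace with the real ones
Host cluster1
    HostName 192.168.1.11
    User root

Host cluster2
    HostName 192.168.1.12
    User root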

You might be prompted for a password. If so, run this to add your local key to the clusters for passwordless login:

ssh-copy-id cluster1
ssh-copy-id cluster2

Accessing Proxmox

Proxmox access details are specified in EC Jira IS-1.

Redacted

Startup Broker

There is a system that can be installed on the clusters, called the startup broker, which only starts containers up when a web request comes in for them.

Some containers can be configured to be always on; this is done by marking them as "start at boot" in Proxmox.
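
For reference, a container can be marked to start at boot either in the Proxmox web UI (Options -> Start at boot) or from a root shell on the cluster host - the container ID below is just an example:

# mark container 101 to start at boot (example ID)
pct set 101 --onboot 1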

Any container that is not marked as start at boot will not be started at boot (obviously), and will also not be restarted after backups, so it will be shut down every day during the overnight backup process.

How it works, in a nutshell

The public nginx container takes the request and is configured to try the real container first, then fail over to the startup broker container.
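
As a purely illustrative sketch of that failover - the names, addresses and ports here are made up, and the real (redacted) template further down is what actually gets used - the nginx side looks roughly like an upstream that lists the real container first and the startup broker as a backup:

# illustrative only - hypothetical names and 172.x addresses
upstream staging_example {
    server 172.16.0.50:80 max_fails=1 fail_timeout=5s;   # the real staging container
    server 172.16.0.10:80 backup;                        # the startup broker container
}

server {
    listen 80;
    server_name staging.example.com;

    location / {
        # when the staging container is down, nginx falls back to the broker
        proxy_pass http://staging_example;
        proxy_set_header Host $host;
    }
}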

The startup broker container takes the request, figures out which container needs starting, and then SSHes into the cluster host as a specially configured user to start that container.

After a short delay, the browser reloads the page, at which point a new request goes through and should reach the now-started container.

Installing

You install the startup broker system using:

cd /opt/Projects/snippets-edmondscommerce/Cluster/shellscripts/cluster/containerStartupBroker/setup
bash ./run.bash

The run.bash script carries out three steps:

Create Container

This creates the startup broker container, which is the one that handles starting other containers. There is a container asset for this installation, though you should not need to run it manually as it is handled by the setup.

Create User

This creates the special containerStartBroker user on the cluster machine.

It is configured not to have a real shell; its shell is actually a bash script with a single purpose: starting containers. The catch is that starting containers requires root privileges, but we don't want there to be any easy way to get to root on the cluster from the public nginx container.
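
The real script is installed by the setup and lives in the snippets repo; purely as a hedged sketch of the idea (not the actual implementation), a restricted login shell of this kind could look something like the following, paired with a sudo rule scoped to pct start only:

#!/bin/bash
# Hypothetical restricted shell for the containerStartBroker user - illustration only.
# sshd invokes the login shell as "shell -c '<remote command>'", so the requested
# container ID arrives as $2.
set -euo pipefail

ctid="${2:-}"
ctid="${ctid//[^0-9]/}"   # keep digits only - anything else is rejected

if [[ -z "$ctid" ]]; then
    echo "usage: ssh containerStartBroker@cluster {container id}" >&2
    exit 1
fi

# assumes a sudoers entry limited to pct start for this user, e.g.
# containerStartBroker ALL=(root) NOPASSWD: /usr/sbin/pct start *
exec sudo /usr/sbin/pct start "$ctid"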

SSH Setup

This bit copies the SSH public key from the startup broker container into the authorized_keys file of the containerStartBroker user on the cluster machine.
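
The setup normally takes care of this; if you ever need to do it by hand, the manual equivalent is roughly the following (the key path and host are assumptions, and the user's .ssh directory must already exist with the right ownership and permissions):

# run from inside the startup broker container - illustrative only
cat ~/.ssh/id_rsa.pub | \
    ssh root@cluster1 'cat >> ~containerStartBroker/.ssh/authorized_keys'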

Debugging

Check things in this order:

  • Nginx config on public nginx container
  • Network connectivity between public nginx and startup broker containers on the 172 network
  • SSH connectivity between the startup broker and the cluster - must have working SSH key-based login.

The real action is in the bash script run by the startup broker container. SSH into this container and then have a look at /var/www/vhosts/startupBroker/startContainer.log
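
A typical debugging session, using commands that appear elsewhere on this page (the broker container's name and ID will differ, so find it in the pct list output):

ssh cluster1
pct list | grep -i broker
pct enter {container id}
tail -f /var/www/vhosts/startupBroker/startContainer.log

From inside the broker container you can also confirm that key-based login to the cluster host works for the containerStartBroker user.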

Configure Public Nginx For Existing Staging Containers

Access the Public Container

First, SSH into the cluster machine.

Then run pct list and find the public container.

Then run pct enter {container id} to get a root shell in that container:

ssh cluster1
pct list | grep public
pct enter 102

Here is the template that needs to be used; you will have to edit the config for existing containers to match:

Redacted

Note

The bits you are changing are the upstream block (which you add) and the proxy_pass line in the location block.
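
After editing the config, it is worth validating and reloading nginx before testing in the browser (this assumes nginx runs as a normal system service inside the public container):

# inside the public nginx container
nginx -t && systemctl reload nginx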