Currently using Proxmox VE version 4.4.
Download the ISO from the Proxmox site and then run through the installer.
Choose ZFS as the filesystem.
Fix Keyboard Layout If Wrong
If the keyboard layout is wrong by default, fix that like this:
# This launches a UI for choosing keyboard layout
dpkg-reconfigure keyboard-configuration

# When you have found the right one (English UK) then run this to apply the changes
service keyboard-setup restart
Install Basic Tools
apt-get update
apt-get -y install vim git-core
Initialise SSH Config
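A minimal sketch of what this involves - generate a key for root and define the gitBare alias used below. The HostName and User values here are illustrative placeholders for the real bare repo server, not the actual config:

ssh-keygen -t rsa -b 4096

# gitBare is referenced when cloning the snippets library below;
# HostName and User are placeholders - use the real bare repo server details
cat >> ~/.ssh/config <<'EOF'
Host gitBare
    HostName git.example.local
    User git
EOF
chmod 600 ~/.ssh/config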
Install Snippets Library
cd /opt/
mkdir Projects
cd Projects
git clone gitBare:~/repos/snippets-edmondscommerce
cp snippets-edmondscommerce/Cluster/clusterAssets/root/.bashrc ~
source ~/.bashrc
Get Container Image for CentOS
pveam update
centos=$(pveam available --section system | grep centos-7 | cut -d ' ' -f 11)
pveam download local $centos
Go to Web UI
Run the following command on the host to get the web UI URL
echo "Proxmox Web UI: https://$(hostname -i):8006"
Adding Extra Cluster Nodes
To set up a Proxmox cluster, simply install Proxmox as above and then do the following.
This should be done before containers are created.
Let's imagine two machines: the original and the new.
If not already done, the first thing we need to do is build the cluster, so on the original machine run:
pvecm create ec-proxmox
root@pve pvecm create ec-proxmox
Corosync Cluster Engine Authentication key generator.
Gathering 1024 bits for key from /dev/urandom.
Writing corosync key to /etc/corosync/authkey.
Add Machine to Cluster
Get the original machine IP address and then on the new machine run:
pvecm add [original machine IP]
root@pve2 pvecm add 192.168.122.234
copy corosync auth key
stopping pve-cluster service
backup old database
waiting for quorum...OK
generating node certificates
merge known_hosts file
restart services
successfully added node 'pve2' to cluster.
This is where it starts to get quite complex: we're dealing with various fairly high-level networking concepts.
Things to be aware of:
The networking stuff is not just the Proxmox clusters but also the configuration on the HP switch and the Draytek router
There are multiple networks at play:
- The external networks that are fully web-facing. These are used by the public containers
- The internal network which all the machines in the office are on (192.168)
- Internal networking just for the containers inside a given cluster machine (172.16)
- The new and experimental NAT-based internal container network that gives us gazillions of IPs to play with
Container IDs to IP Addresses
We want to be able to have an effectively unlimited number (a very large limit) of internal containers to work with.
We also want to be able to deterministically calculate the IP address or the container ID whilst only knowing one of the two.
To do this we will use the idea that container IDs are nine digits, split into three groups of three, with each group mapping to one octet of the container's 10.x IP address - so container 102000001 lives at 10.102.0.1.
This can then be used to simplify routing later on.
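As an illustration of the scheme, here are two helper functions. These are made up for this example and are not part of the snippets library:

# Convert a 9 digit container ID such as 102000001 into its IP, eg 10.102.0.1
ctid_to_ip() {
    local ctid="$1"
    # force base 10 so groups like 000 or 008 aren't read as octal
    printf '10.%d.%d.%d\n' "$((10#${ctid:0:3}))" "$((10#${ctid:3:3}))" "$((10#${ctid:6:3}))"
}

# Convert an IP such as 10.102.0.1 back into its 9 digit container ID, eg 102000001
ip_to_ctid() {
    local a b c
    IFS=. read -r _ a b c <<< "$1"
    printf '%03d%03d%03d\n' "$a" "$b" "$c"
}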
Bridge Configuration on the Proxmox Host
Note: I'm displaying the config file, but I'd strongly suggest creating the bridge via the Proxmox Web UI and then just adding in the NAT stuff (as commented) manually.
root@pve cat /etc/network/interfaces
# network interface settings; autogenerated
# Please do NOT modify this file directly, unless you know what
# you're doing.
#
# If you want to manage part of the network configuration manually,
# please utilize the 'source' or 'source-directory' directives to do
# so.
# PVE will preserve these directives, but will NOT read its network
# configuration from sourced files, so do not attempt to move any of
# the PVE managed interfaces into external files!

auto lo
iface lo inet loopback

iface eth0 inet manual

auto vmbr0
iface vmbr0 inet static
        address 192.168.122.234
        netmask 255.255.0.0
        gateway 192.168.122.1
        bridge_ports eth0
        bridge_stp off
        bridge_fd 0

auto vmbr10
iface vmbr10 inet static
        address 10.234.0.0
        netmask 255.0.0.0
        bridge_ports none
        bridge_stp off
        bridge_fd 0
        # THIS BIT BELOW HAS TO BE ADDED MANUALLY
        post-up echo 1 > /proc/sys/net/ipv4/ip_forward
        post-up iptables -t nat -A POSTROUTING -s '10.234.0.0/8' -o vmbr0 -j MASQUERADE
        post-down iptables -t nat -D POSTROUTING -s '10.234.0.0/8' -o vmbr0 -j MASQUERADE
In this configuration, we have added a bridge vmbr10 and configured it with CIDR 10.234.0.0/8, which equates to subnet mask 255.0.0.0. Under our numbering scheme this host uses the 10.234 slice of that range, giving us all IPs from 10.234.0.1 up to 10.234.255.254 - that's plenty.
In a range of IPs, the lowest address is reserved as the network address (10.0.0.0 here) and the highest as the broadcast address (10.255.255.255 here), so neither should be assigned to a container.
We are using NAT to allow the containers to access the internet.
Here is the relevant output of ip a on the host:
root@pve ip a
[...SNIPPED...]
4: vmbr10: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether fe:c1:38:f0:b0:19 brd ff:ff:ff:ff:ff:ff
    inet 10.234.0.0/8 brd 10.255.255.255 scope global vmbr10
       valid_lft forever preferred_lft forever
    inet6 fe80::ec37:46ff:fe9c:e74/64 scope link
       valid_lft forever preferred_lft forever
Network Configuration for Container
When creating the container, we need to choose the 9 digit container ID / IP address, composed of 10, then the last octet of the host machine, eg 102, then an increment starting from 1.
The bridge should be set to vmbr10 - the one that is configured to handle the NAT internal network.
The IP address should be set like this: 10.102.0.1/8 and the gateway should be set to the host's vmbr10 address (10.102.0.0 in this example).
Here is an example config
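A minimal sketch of creating such a container from the CLI with pct - the template filename and hostname are illustrative, the ID follows the scheme above, and the gateway is assumed to be the host's vmbr10 address:

pct create 102000001 local:vztmpl/centos-7-default_20171212_amd64.tar.xz \
    --hostname example-container \
    --net0 name=eth0,bridge=vmbr10,ip=10.102.0.1/8,gw=10.102.0.0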
The container host can access the container, but a machine on the local (192.168) network cannot directly access the container.
Dev Local Machine Access to Containers
To get a local dev machine able to directly access the containers, we need to add a route for each Proxmox host
Given the following scenario:
How we do this at the Desktop level (the old way of doing it)
The below is kept for reference, but the way we should actually do it is via the router, which you can find after the Desktop explanation.
Desktop Ethernet Adapter
To add the routes to the containers, we need to run:
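A sketch of the kind of route this means, assuming the containers on host 192.168.122.102 live in the 10.102.0.0/16 slice and the desktop's adapter is eth0 (both assumptions):

# route traffic for this host's container slice via the Proxmox host itself
ip route add 10.102.0.0/16 via 192.168.122.102 dev eth0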
Make these routes persistent
To make these routes permanent, we need to run the following:
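One way of doing this, assuming the dev machine uses NetworkManager and the connection is named eth0 (both assumptions; adjust to the actual setup):

# persist the route on the NetworkManager connection, then reapply it
nmcli connection modify eth0 +ipv4.routes "10.102.0.0/16 192.168.122.102"
nmcli connection up eth0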
How we do this using the router (the right way):
To set up a route to the containers at the router level, we have to log into the router and head over to LAN and then Static Route. We then need to click a number under Index that isn't in use. Once we do this we can see a table listing out the following:
- Destination IP Address
- Subnet Mask
- Gateway IP Address
- Network Interface
We need to then fill out each box with the relevant information. Also, don't forget to click Enable in the top left or it won't save what you enter.
Public Nginx Containers to Internal Containers
The Public Nginx containers (that serve *.developmagento.co.uk and *.edmondscommerce.net) communicate with the internal containers via a private 172.16 network. This isolates the Public Nginx containers from the internal 192.168 network.
Each cluster has a virtual bridge vmbr2 set up that is not connected to a real ethernet card and also has no bridge ports. This makes it entirely virtual and internal to the cluster machine itself.
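A sketch of what the vmbr2 stanza in /etc/network/interfaces might look like - only the bridge itself is confirmed above; the host address on the bridge is an assumption:

auto vmbr2
iface vmbr2 inet static
        address 172.16.0.254
        netmask 255.255.0.0
        bridge_ports none
        bridge_stp off
        bridge_fd 0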
Container IDs to IPs for the Cluster Internal Network
This is very similar to the logic in #container-ids-to-ip-addresses, but as there is no issue with IPs being replicated across cluster machines (as each one has its own internal network) we can just use the last 2 octets.
All we do here is pull the last 2 octets from the main container IP and use them in the 172.16 range - so a container at 10.102.0.1 gets 172.16.0.1 internally.
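As a small illustration (again a made-up helper, not part of the snippets library):

# Map a main container IP such as 10.102.0.1 to its 172.16 internal IP, eg 172.16.0.1
main_ip_to_internal() {
    local c d
    IFS=. read -r _ _ c d <<< "$1"
    printf '172.16.%d.%d\n' "$c" "$d"
}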
Allowing a Container to be Proxied Via Public Nginx Container
In order for a container to be able to be proxied via the Public Nginx container, we need to add the bridge vmbr2 to it and assign the correct IP address.
Then the Nginx configuration needs to be set up on the Public Nginx container.
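A minimal sketch of the kind of server block this means - the domain and upstream IP are illustrative:

server {
    listen 80;
    server_name example.developmagento.co.uk;

    location / {
        # proxy through to the internal container over the private 172.16 network
        proxy_pass http://172.16.0.1;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}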
Increasing Container Disk Size
Double click your container, then go to Resources and select Root Disk. Click the "Resize disk" button, specify the increment size, and the disk will be expanded by that amount.
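The same can be done from the CLI with pct (the container ID here is illustrative):

# grow the container's root disk by 10G
pct resize 102000001 rootfs +10G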