"PVEShell" - Easily manage Proxmox containers
November 2018: Using Salt SSH for Proxmox containers through the host's ssh daemon
Motivation
I've been using containers on Linux for years (OpenVZ, Linux VServer). Later, LXC with ProxmoxVE as a convenient and powerful management wrapper became my containerization solution of choice. As a configuration management solution, I selected Saltstack over Chef, Puppet, and Ansible for various reasons.
To manage Proxmox LXC containers with Salt, one either needs to install a Salt Minion in each container or, alternatively, install an ssh daemon in each container so it can be managed via Salt SSH. Neither solution is elegant, whether from a resource consumption or a complexity point of view. I wanted to manage my Proxmox LXC containers with Salt more easily.
Description
The idea is to manage the containers using Salt SSH with the host's ssh daemon as the entry point.
Implementation
To access a container through the host's ssh daemon, each container gets a dedicated user on the host system. This user is assigned a login shell that opens a shell inside the respective container.
Basic principle
First, we create a user with root permissions for each container. In the following example, a user for accessing the container with id 261 is created. The login shell is set to a script. The script is needed as a wrapper because a configured login shell cannot carry command line arguments.
useradd -m --non-unique -u 0 -g 0 -s /home/scripts/pctshell261 container261
The login shell script needs to be owned by root and have execute permissions. It is short and simple and just enters the container with the required id:
#!/bin/bash
/usr/sbin/pct enter 261
When the user "container261" now logs in via ssh, they get a shell inside container 261. Nice!
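When running many containers, the per-container users can be generated from "pct list" output instead of being created by hand. A hypothetical helper sketch (the function name and the awk parsing are my assumptions, not part of the original setup; column 1 of "pct list" is the VMID):

```shell
# Hypothetical sketch: turn `pct list` output into one useradd command per
# container. NR > 1 skips the header row; $1 is the VMID column.
generate_useradd_commands() {
    awk 'NR > 1 { printf "useradd -m --non-unique -u 0 -g 0 -s /home/scripts/pctshell%s container%s\n", $1, $1 }'
}

# On a Proxmox host, one could review the commands before running them:
# pct list | generate_useradd_commands
```

Printing the commands instead of executing them directly makes it easy to inspect the result before touching the system.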
The complete implementation
The basic solution shown above is not sufficient for container configuration management using Salt SSH. If you try using Salt SSH with it (after configuring access using Salt's SSH key), Salt SSH will hang indefinitely. The reason is that Salt SSH calls the shell with the "-c" parameter to execute an "scp" command that copies the files needed for Salt execution to the other system. This is not supported by our simple login shell script. Let's improve the solution!
When we reuse the root user's home directory as the home directory of the container users, we do not need to configure certificate authentication for Salt SSH for each container user. If Salt SSH works for the host, it will also work for the container users. That makes things simpler.
useradd --non-unique -u 0 -g 0 -d /root -s /root/pctshell container261
As you see above, the "pctshell261" script was replaced by a generic "pctshell" script. The idea is to extract the container id from the username and use it within the script. That removes redundancy and is less error-prone.
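Because all container users share /root as their home directory, a single authorized_keys entry covers the host and every container user. A minimal sketch, assuming Salt's default master key location (/etc/salt/pki/master/ssh/salt-ssh.rsa.pub); the function name is mine:

```shell
# Sketch (assumption, not from the original setup): append a public key to
# an authorized_keys file under the given home directory, with the usual
# restrictive permissions sshd expects.
authorize_salt_key() {
    local home_dir="$1" pubkey="$2"
    mkdir -p "$home_dir/.ssh"
    cat "$pubkey" >> "$home_dir/.ssh/authorized_keys"
    chmod 700 "$home_dir/.ssh"
    chmod 600 "$home_dir/.ssh/authorized_keys"
}

# On the Proxmox host, one would call:
# authorize_salt_key /root /etc/salt/pki/master/ssh/salt-ssh.rsa.pub
```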
The second change is an extended login shell script that accepts the "-c" parameter to execute a command. For this, we call "pct exec".
#!/bin/bash
# Get the container id from the end of the username string
container_id=$(echo "$USER" | grep -Eo '[0-9]+$')
# Exit if no container id could be derived from the username
if [ -z "$container_id" ]; then
    echo "Username does not end with a numeric value that could be interpreted as container id [$USER]" >&2
    exit 1
fi
# In case no arguments are given, enter the container
if [ -z "$1" ]; then
    /usr/sbin/pct enter "$container_id"
    exit
fi
# In case a shell command shall be executed, do so
if [ "$1" = "-c" ]; then
    shift
    if [ "$(pct config "$container_id" | grep ostype)" = "ostype: alpine" ]; then
        /usr/sbin/pct exec "$container_id" -- ash -c "$@"
    else
        /usr/sbin/pct exec "$container_id" -- bash -c "$@"
    fi
    exit
fi
# Exit in other cases
echo "Unknown command line argument [$1]" >&2
exit 2
Further details
The "scp" command needs to be available inside the containers to be managed. This can usually be achieved by installing the package "openssh-client". "scp" is needed to copy the Salt SSH scripts to the managed container.
For Salt SSH to function properly, the login shell (above it is "/root/pctshell") also needs to exist at the same path within the container. This can simply be done by adding a symbolic link inside the container pointing to the container's usual shell (for distributions with bash: "ln -s /bin/bash /root/pctshell"; for an Alpine container: "ln -s /bin/ash /root/pctshell").
Result
Mission accomplished! Now we can configure containers very easily using Salt SSH. We just need to create a user (as shown above) for each container.
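With the users in place, the Salt SSH roster simply points each container minion at the Proxmox host, logging in as the matching container user. A sketch of what an /etc/salt/roster entry could look like (the minion id and hostname are placeholders):

```yaml
# /etc/salt/roster -- each entry logs into the Proxmox host as the
# per-container user created above (hostname is a placeholder)
container261:
  host: pve.example.com
  user: container261
```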
If you find this useful or have comments, please send a short email to me. Thanks in advance!
Update December 2020
After more than two years in use without any problems, the code above stopped working with salt-ssh version 3001.2 (released November 3rd, 2020). The Salt developers started piping a code snippet to /bin/sh, which no longer works with the "$@" approach used before. I did not find a nice and elegant solution and resorted to a workaround: salt-ssh works with the code shown below. In case you know how to do this more elegantly, please drop me a note.
#!/bin/bash
# Get the container id from the end of the username string
container_id=$(echo "$USER" | grep -Eo '[0-9]+$')
# Exit if no container id could be derived from the username
if [ -z "$container_id" ]; then
    echo "Username does not end with a numeric value that could be interpreted as container id [$USER]" >&2
    exit 1
fi
# In case no arguments are given, enter the container
if [ -z "$1" ]; then
    /usr/sbin/pct enter "$container_id"
    exit
fi
# In case a shell command shall be executed, do so
if [ "$1" = "-c" ]; then
    shift
    if [ "$(pct config "$container_id" | grep ostype)" = "ostype: alpine" ]; then
        command="ash"
    else
        command="bash"
    fi
    if [[ "$1" != /bin/sh* ]]; then
        # This works for scp etc.
        exec -a "$0" -- /usr/sbin/pct exec "$container_id" -- "$command" -c "$@"
    else
        # This works for Salt piping stuff to the shell (/bin/sh << ...)
        echo "$@" | /usr/sbin/pct exec "$container_id" -- "$command"
    fi
    exit
fi
# Exit in other cases
echo "Unknown command line argument [$1]" >&2
exit 2
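The distinction made by the [[ "$1" != /bin/sh* ]] test can be expressed in isolation: command strings that salt-ssh sends as a /bin/sh heredoc must be fed to the container's shell via stdin, while everything else (such as the scp calls) goes through "-c". A sketch of that predicate (the function name is mine):

```shell
# Sketch: classify the command string sshd hands to the login shell.
# salt-ssh >= 3001.2 sends snippets starting with /bin/sh that must be
# piped into the container's shell; other commands (scp etc.) go via -c.
needs_stdin_pipe() {
    case "$1" in
        /bin/sh*) return 0 ;;
        *)        return 1 ;;
    esac
}
```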
Update February 2023
The solution has proven itself in practice over the previous years. Now another comment needs to be made: starting with OpenSSH v9, the scp tool uses the SFTP protocol for file transfer to reduce its attack surface. As a result, Salt hosts running Alpine Linux 3.15+ can no longer provision Alpine Linux hosts via salt-ssh. To work around this issue, configure the Alpine hosts as follows:
apk add openssh-sftp-server
ln -s /usr/lib/ssh /usr/lib/openssh