When using Gitlab as a Git platform, it often happens that even in the early development of our application we want to start exploring the powerful continuous integration system offered by the website: Gitlab CI. After some playing with automatic building and testing we may want to try auto deploy: for example, we may want the application, after being built and tested, to be uploaded to a server, and perhaps we also need to execute some pre- and post-build commands on the remote server. This is exactly what happened while creating this site, so now I want to share with you how I got the system to work.

SSH and SSHFS

The most powerful tool to use in this case is, without doubt, SSH. SSH is not just a simple protocol that allows you to open a shell on a remote node: it offers a lot of other services. In this post I will show you how to:

  • Execute a command on a remote machine
  • Mount a remote folder

These are the main needs for a Heroku-like auto deploy.

Set up the server

Before starting to exploit the CI we have to do some setup on the server that will host the application; take these steps also as security suggestions.

CI is my username

The first step is to create a new user on the deploying server (we assume a Linux server, in particular CentOS 7). The purpose of this action is that this user will have access only to a certain part of the filesystem and will obviously not be able to run superuser commands (but we will also see how to set up the server to allow the execution of specific sudo commands only). In order to do this we can use the command

$ useradd ci

this command will create a user called ci and its home folder, generally in /home/ci. However, if you want to specify a custom home folder you can run

$ useradd -d /my/preferred/homefolder ci
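
Since password login will be disabled later on, the ci user also needs an .ssh directory to hold its authorized keys. A minimal sketch for creating it with the permissions OpenSSH expects (700 on the directory, 600 on the file):

$ sudo mkdir -p /home/ci/.ssh
$ sudo touch /home/ci/.ssh/authorized_keys
$ sudo chmod 700 /home/ci/.ssh
$ sudo chmod 600 /home/ci/.ssh/authorized_keys
$ sudo chown -R ci:ci /home/ci/.ssh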

Starting and Hardening SSH

The second part of the server setup regards SSH. In order to ensure a higher level of security it’s suggested to disable login via password and to allow login via SSH key only. To do this we can edit the SSH configuration file /etc/ssh/sshd_config. Make sure that the following lines are present and uncommented

PermitRootLogin no

AuthorizedKeysFile .ssh/authorized_keys
PasswordAuthentication no
ChallengeResponseAuthentication no # disable the interactive login

# In case you have troubles
GSSAPIAuthentication no
GSSAPICleanupCredentials no
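
Before going on it’s worth validating the edited file, since a syntax error in sshd_config can lock you out of the server. OpenSSH provides a test mode for exactly this:

$ sudo sshd -t

The command prints nothing and exits successfully if the configuration is valid.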

Now, before restarting the ssh daemon, if you logged in with username and password you have to copy the SSH public key of your local host into the file /home/username/.ssh/authorized_keys. This file contains, one per line, the SSH public keys of the hosts allowed to log in as the current username 1. At this point you can restart the ssh daemon with

$ sudo systemctl restart sshd
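
Note that the key must be copied before the restart, while password authentication still works. An easy way to do it is ssh-copy-id; a minimal sketch, assuming your public key is in the default location and myserver.example.com stands in for your server’s address:

$ ssh-copy-id -i ~/.ssh/id_rsa.pub username@myserver.example.com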

Now we have to allow the temporary Docker container that will deploy our application to log in to the server via SSH as the user ci. For this reason we generate an SSH key pair on a spare machine (a VM, or even your local host) with the command

$ ssh-keygen

this will generate, by default, the following files in /home/<username>/.ssh/:

  • a public key, called id_rsa.pub
  • a private key, called id_rsa
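
One detail worth noting: the key pair must not be protected by a passphrase, because the CI job has no way to type one interactively. A non-interactive invocation, using standard OpenSSH flags (-N "" sets an empty passphrase), looks like

$ ssh-keygen -t rsa -b 4096 -N "" -f ~/.ssh/id_rsa -C "gitlab-ci deploy key"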

These two keys will be used by the Docker container run by gitlab-ci when deploying, so we have to authorize that public key. To do this, put the content of id_rsa.pub as a line of the file /home/ci/.ssh/authorized_keys on the remote server (besides the public key of your local host, if you want to manually test that everything works: try $ ssh [email protected]). Now the server is completely configured to work with the CI, but remember that you have to set the correct permissions on the files and folders that we want the ci user to edit.
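
For example, if the deploy target is the /var/www/html folder used later in this post, a minimal sketch for handing it over to the ci user could be

$ sudo chown -R ci:ci /var/www/html

Adapt the path and the ownership to whatever your web server actually requires.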

Set up Gitlab

Now we have to open the Gitlab repository in which we want to enable the auto deploy; go to the Settings menu on the right, then CI/CD. In the Secret Variables section we add two variables:

  • ID_RSA, whose value is the content of the file id_rsa
  • ID_RSA_PUB, whose value is the content of the file id_rsa.pub

Note that for both variables you must not paste the trailing newline into the text area. We will see in the next section that these two variables will be written into the .ssh/ folder of the Docker container.

Let’s see a basic CI job that works with the configuration seen above. The Docker image used is based on Alpine Linux, a very lightweight distribution, since we only need basic commands when deploying. Here’s what you would have in the .gitlab-ci.yml file:

# Deploying
deployssh:
  image: alpine
  stage: deploy
  script:
    - apk update
    - apk add openssh sshfs
# Install the SSH keys
    - mkdir ~/.ssh
    - echo "$ID_RSA" >> ~/.ssh/id_rsa
    - echo "$ID_RSA_PUB" >> ~/.ssh/id_rsa.pub
    - chmod 600 ~/.ssh/id_rsa
    - chmod 600 ~/.ssh/id_rsa.pub
# Start the ssh-agent and add the private key
    - eval $(ssh-agent -s)
    - ssh-add ~/.ssh/id_rsa
    - rm -rfv ~/.ssh/known_hosts
# Mount remote dir
    - "sshfs -p 22 [email protected]:/var/www/html /mnt -o StrictHostKeyChecking=no"
    # Do something with the mounted dir /mnt
# Execute a remote command
    - ssh [email protected] "ls"
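
The # Do something with the mounted dir /mnt placeholder is where the actual deployment happens. As a hypothetical example, if a previous build stage produced a public/ folder and passed it along as an artifact, the script could continue like this (the folder name is an assumption; fusermount comes with the fuse package that sshfs pulls in):

# Copy the build artifacts into the mounted remote folder
    - cp -rv public/* /mnt/
# Cleanly unmount the remote folder when done
    - fusermount -u /mnt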

Let’s go through the key points of the configuration step by step. Remember that before any step starts you have to make sure that openssh and sshfs are installed in the container, which is what the two apk commands at the top do.

Installing the SSH keys

The first thing that we have to do is to install the key pair whose public key has been registered in the authorized_keys of the ci user on the server. Since we stored the key pair as secret variables, we simply echo the variables into the correct position.

# Install the SSH keys
    - mkdir ~/.ssh
    - echo "$ID_RSA" >> ~/.ssh/id_rsa
    - echo "$ID_RSA_PUB" >> ~/.ssh/id_rsa.pub
    - chmod 600 ~/.ssh/id_rsa
    - chmod 600 ~/.ssh/id_rsa.pub

In other words:

  1. We create the key directory .ssh
  2. We echo $ID_RSA and $ID_RSA_PUB into files in that directory
  3. We set the correct permissions on the keys

Starting the agent

Once we have the keys, we start the ssh-agent (not to be confused with the sshd server daemon) and we add the private key we echoed before with the commands:

# Start the ssh-agent and add the private key
    - eval $(ssh-agent -s)
    - ssh-add ~/.ssh/id_rsa
    - rm -rfv ~/.ssh/known_hosts

We also remove, for safety, the known_hosts file. It’s a file that stores known hosts together with their public keys, and it can create problems if the container is cached and the server key pair changes for whatever reason.

Mounting a remote dir

SSHFS allows us to mount a remote directory, and it’s very simple with the command 2

# Mount remote dir
    - "sshfs -p 22 [email protected]:/var/www/html /mnt -o StrictHostKeyChecking=no"

We also set the option StrictHostKeyChecking=no, which disables the interactive prompt with which the ssh client asks to add the public key of the server to our known_hosts file. Since we cannot interact with the container while it’s running, we need either to disable the feature or to manually add the public key of the server to the known_hosts file. Once we have mounted the remote directory we can copy files to the server.
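
If you prefer the second option and want to keep host key checking enabled, a common alternative (not used in the job above) is to record the server’s key with ssh-keyscan at the beginning of the script, with myserver.example.com standing in for your server’s address:

# Record the server's host key instead of disabling checking
    - ssh-keyscan -p 22 myserver.example.com >> ~/.ssh/known_hosts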

Executing a remote command

Another kind of interaction that we may want with the remote server when deploying is the execution of some commands. This is very simple, since it suffices to write in the CI script

# Execute a remote command
    - ssh [email protected] "ls"

What if the command requires superuser privileges? We surely cannot call sudo directly, since it would ask us for a password. There is a simple trick that we can exploit by editing the /etc/sudoers file (preferably through visudo)

    %ci ALL=NOPASSWD: /bin/systemctl restart my_service

The line allows the members of the ci group (the leading % denotes a group; since useradd by default created a ci group containing our user, this covers it, and writing ci without the % would target the user alone) to execute the restart service command with sudo without entering a password.
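
With that entry in place, the deploy job can restart the service non-interactively. Continuing the example with the same hypothetical my_service:

# Restart the service on the remote server, no password prompt needed
    - ssh [email protected] "sudo /bin/systemctl restart my_service"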

  1. Remember that you can find the current public key of your local host in ~/.ssh/id_rsa.pub 

  2. Note that the command is wrapped in double quotes because the colon can cause some problems when parsed in a YAML configuration file.