Running Ansible from Gitlab CI

Louis De Jaeger
Mar 3, 2022 · 5 min read

We’ve been using Ansible for a few years now to configure a few servers, like the Mac minis we use as Gitlab runners for mobile development. While the Ansible repositories are stored on Gitlab, we’ve been using an AWX instance to run/deploy these Ansible scripts to our hosts.

Ansible

If you haven’t heard of Ansible before, it’s an open-source automation tool/platform used for IT tasks such as configuration management, application deployment, intra-service orchestration and provisioning. As IT systems become more complex and need to scale, automation plays a critical role in DevOps.

Some of the main reasons we chose Ansible are:

  • It’s free (open-source)
  • Ansible is very simple to set up and use
  • Very powerful
  • It provides a lot of flexibility
  • It works agentless, so you don’t need to install anything on the clients

AWX

AWX is essentially a web-based user interface on top of Ansible that allows you to schedule Ansible jobs and provides some statistics on the performance of those runs. We noticed our AWX instance needed some updates, and over time the AWX server has been replaced by Red Hat’s awx-operator, which requires a different way of deploying than our current setup.

Screenshot of the AWX dashboard

The AWX instance is not the easiest to configure and does not have the best UX, in my opinion. It takes some time to understand how everything is connected. Additionally, connecting it to other tools like the Git repositories, authentication providers, … is not the easiest integration I’ve done in my career.

So we ended up investigating whether we could run our Ansible jobs from Gitlab CI/CD, as this could improve the UX and integrations. As an added win, we wouldn’t need to maintain yet another application.

Gitlab CI

After a quick search, it became clear that running Ansible from Gitlab isn’t even that hard, so I went ahead and created a new repository to test the idea.

I copied the Ansible playbooks from our current repository into the new one and created a very basic .gitlab-ci.yml file.

In the example below, you can see that I’m only using one step: I just run the playbook. By using a Docker image that already contains Ansible, I don’t really need any additional configuration, except for loading our Ansible vault password from the CI/CD secrets in Gitlab.
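A minimal sketch of what such a file could look like; the image, playbook path and variable name here are placeholders rather than our exact configuration:

```yaml
# .gitlab-ci.yml (minimal sketch; image, playbook and inventory paths are placeholders)
deploy:
  image: willhallonline/ansible:latest   # any image that already ships with ansible
  script:
    # write the vault password from a masked CI/CD variable to a temporary file
    - echo "$ANSIBLE_VAULT_PASSWORD" > .vault_pass
    - ansible-playbook -i inventory/hosts site.yml --vault-password-file .vault_pass
```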

As this worked, I wanted some additional rules from a security point of view:

  • Code quality/syntax should be checked before running
  • No one should be able to directly deploy changes to our production environment
  • Audit logs of changes should be available

Gitlab Flow

To implement these rules, I’ve created a Gitlab flow where you can have one or more development branches. Once a branch is ready and you want to push its changes to the staging environment, you do this through a merge request on Gitlab. Once you want that code in production, you create a new merge request from the master branch to the production branch. This way, we’ve automated the deployment while keeping control and auditing through merge requests, as sketched below.

Merge flow
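Translated to Gitlab CI terms, this flow roughly maps to the following workflow rules (a sketch; how branches are handled is an assumption based on the flow above):

```yaml
workflow:
  rules:
    # run a pipeline for merge requests (staging and production deploys)
    - if: '$CI_PIPELINE_SOURCE == "merge_request_event"'
    # and for every push to a branch (syntax/lint verification)
    - if: '$CI_COMMIT_BRANCH'
```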

Setup and before_script

In the .gitlab-ci.yml file, I first define the different stages; what each stage does will become clear throughout the article. After this, I’ve added some additional configuration, like the Docker image I’m using for the deployment.

In the before_script, I install the ansible-lint package, as it is not included in the Docker image I’m using (I will soon create our own image that includes both ansible and ansible-lint to make this step smaller). After this, the versions of ansible and ansible-lint are printed in the output, which can come in handy for troubleshooting.
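Put together, the setup part of the file could look roughly like this (the stage names and the image are my guesses based on the steps described in this article, not the exact configuration):

```yaml
stages:
  - verify
  - pre-staging
  - staging
  - pre-production
  - production

image: willhallonline/ansible:latest      # placeholder for the ansible image we use

before_script:
  - pip install ansible-lint              # not included in the image (yet)
  - ansible --version                     # print versions for troubleshooting
  - ansible-lint --version
```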

Verify commits

Each commit pushed to Gitlab should be free of syntax errors. So for each commit, a verify pipeline is triggered that runs ansible-lint and ansible-playbook --check.
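A sketch of such a verify job, assuming a playbook called site.yml and a staging inventory (both names are placeholders):

```yaml
verify:
  stage: verify
  script:
    - echo "$ANSIBLE_VAULT_PASSWORD" > .vault_pass
    - ansible-lint site.yml
    # --check performs a dry run without changing anything on the hosts
    - ansible-playbook -i inventory/staging site.yml --check --vault-password-file .vault_pass
  rules:
    - if: '$CI_COMMIT_BRANCH'                              # run on every push
    - if: '$CI_PIPELINE_SOURCE == "merge_request_event"'   # and on merge requests
```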

Merge requests to main (staging)

When your change is ready, you can push it to staging through a merge request. The verify step will run again, but the pre-staging and staging pipelines will be triggered as well.

In the pre-staging pipeline, an Ansible ping is performed to make sure the hosts are reachable and online. If this fails, the pipeline is cancelled. If the hosts are OK, we can go ahead, run the Ansible playbooks on the hosts, and merge the code into master.
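As a sketch, the pre-staging and staging jobs could look like this (inventory paths are placeholders, and I’m keying the rules on the default branch, i.e. main/master):

```yaml
pre-staging:
  stage: pre-staging
  script:
    # fail fast if a staging host is unreachable
    - ansible -i inventory/staging all -m ping
  rules:
    - if: '$CI_MERGE_REQUEST_TARGET_BRANCH_NAME == $CI_DEFAULT_BRANCH'

staging:
  stage: staging
  script:
    - echo "$ANSIBLE_VAULT_PASSWORD" > .vault_pass
    - ansible-playbook -i inventory/staging site.yml --vault-password-file .vault_pass
  rules:
    - if: '$CI_MERGE_REQUEST_TARGET_BRANCH_NAME == $CI_DEFAULT_BRANCH'
```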

💡 When creating the merge request in Gitlab, you can tick the checkbox to automatically merge to master once the pipeline succeeds.

Merge request to production

When the code has been validated on the staging infrastructure, it’s ready to be deployed to the production environment as well. For this step, I’m using the same principle as the development-to-main merge: you create a merge request from main to production. The only difference is that Ansible now uses the production inventory of hosts.
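Sketched out, the production counterpart only swaps the inventory and the target branch (again, paths and names are assumptions):

```yaml
pre-production:
  stage: pre-production
  script:
    - ansible -i inventory/production all -m ping
  rules:
    - if: '$CI_MERGE_REQUEST_TARGET_BRANCH_NAME == "production"'

production:
  stage: production
  script:
    - echo "$ANSIBLE_VAULT_PASSWORD" > .vault_pass
    - ansible-playbook -i inventory/production site.yml --vault-password-file .vault_pass
  rules:
    - if: '$CI_MERGE_REQUEST_TARGET_BRANCH_NAME == "production"'
```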

Protect master and production branch

It’s important that the master and production branches are protected, so no one can commit directly to them. From a security and availability point of view, it’s important that the correct pipelines are run on the merge requests.

Protected branches in Gitlab

In addition to this, you can set up rules for merge requests to have more control over the approvals and changes to your infrastructure as well.

Conclusion

Running Ansible from Gitlab is pretty easy and gives you a lot of control over deployments while maintaining audit logs and code quality.

I would recommend using this setup to improve the way you’re automating configuration management with Ansible.

Louis De Jaeger

I work as Security & Privacy Officer at @itpocket in the IT team. Also working as a freelancer in IT and events. Enthusiastic about tech, infosec and privacy!