Jenkins
Dominic Cleal, 06/02/2015 10:31 AM


The Foreman project maintains its own Jenkins instance for continuous integration at

Managing jobs

Jenkins itself is deployed onto one master VM from foreman-infra. Jobs are maintained in two ways:

  1. Updated by hand - some developers have accounts to log in and modify jobs
  2. Jenkins Job Builder - a set of YAML configuration files in foreman-infra that generate jobs

New jobs should be written via Jenkins Job Builder if possible.

Jenkins Job Builder

Jenkins Job Builder (JJB) is an OpenStack tool for generating Jenkins job definitions (an XML file) from a set of YAML job descriptions, which we store in version control.
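For illustration, a minimal job description might look like this (the job name, repository and build commands here are hypothetical, not taken from our definitions):

```yaml
# Hypothetical JJB job description; the real definitions live in foreman-infra
- job:
    name: test_example
    scm:
      - git:
          url: https://github.com/theforeman/example.git
          branches:
            - develop
    builders:
      - shell: |
          bundle install
          bundle exec rake jenkins:unit
```

JJB expands each such description into the verbose XML format Jenkins stores internally, so jobs can be reviewed and versioned as ordinary text.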

Puppet deploys these onto our Jenkins server (a recursive file copy) and when they change, it runs the JJB tool to update the jobs in the live instance. It also refreshes them daily to overwrite manual changes.

We use a fork of JJB with some extra support for plugins we use:

To test it:

  1. check out the above JJB repo and local branch
  2. recommended: create a Python virtualenv and activate it
    1. virtualenv jjb
    2. source jjb/bin/activate
  3. easy_install .
  4. check out foreman-infra
  5. cd foreman-infra/puppet/modules/jenkins_job_builder/files
  6. jenkins-jobs -l debug test -r . -o /tmp/jobs

JJB will populate /tmp/jobs with the proposed XML files.

Useful resources:

  1. Job definitions, templates etc.
  2. Modules, e.g. SCM, publishers, builders

Pull request testing

Core Foreman projects have their GitHub pull requests tested on our Jenkins instance, so testing is identical to the way we test the primary development branches themselves. Less significant projects (such as installer submodules) may use Travis CI.

PR jobs

Every project that needs PR testing requires two Jenkins jobs. Taking core Foreman as an example:

The results from these PR jobs are only stored for a few weeks, sufficient for reviews.

The jobs are currently being transitioned from an old style, which used git apply with basic .patch files, to a proper merge of the branch under test into the primary project branch, which is far more reliable.

Old style (.patch)

The PR test job takes the PR number parameter, downloads the patch from GitHub (by appending a .patch extension to the PR URL), applies it to the local checkout of the project and then builds as normal. This process means PRs are effectively rebased onto the current development branch before tests are run, rather than testing the branch as-is. GitHub tracks "mergeability" so we don't test PRs that can't be merged cleanly.
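A rough sketch of that step in shell (the repository URL here is a hypothetical example; the real job script differs):

```shell
#!/bin/bash
# Sketch of the old-style PR build step; the repository URL is a
# hypothetical example, not the real job configuration.
pr_number="${pr_number:-1234}"

# GitHub serves a plain-text patch when .patch is appended to the PR URL
patch_url="https://github.com/theforeman/foreman/pull/${pr_number}.patch"

# Apply it onto the current development branch checkout (network step,
# shown commented in this sketch):
# curl -sSfL "${patch_url}" | git apply

echo "${patch_url}"
```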

New style (git URL/ref)

The PR test job takes the following parameters:

  • pr_git_url: URL to the repository containing the PR to clone
  • pr_git_ref: branch name in the above repository for the PR
  • pr_number: optional, for informational purposes only (e.g. tracking from a job back to a PR)

Under source code management in the job configuration, set up:

  1. Project git repository:
    • Repository URL:
  2. Branches to build: develop
  3. Additional Behaviours > Prune stale remote-tracking branches
  4. Additional Behaviours > Wipe out repository & force clone

As the first build step, add:

#!/bin/bash -ex
# Merge the PR branch into the checked-out primary branch before building
if [ -n "${pr_git_url}" ]; then
  git remote add pr "${pr_git_url}"
  git fetch pr
  git merge "pr/${pr_git_ref}"
fi

PR scanner

To initiate the PR tests, the test-pull-requests script is used to scan for open PRs, check they're mergeable and then trigger the Jenkins job. The script is a fork from the OpenShift upstream, enhanced in a few areas including changing from comment-based updates to the GitHub status API.

The script runs under the pull_request_scanner job under Jenkins and is set to run a few times every hour. It scans all configured projects for PRs and then exits, leaving the PR test jobs themselves to execute asynchronously.

The configuration files are deployed via our Puppet infrastructure for each project, and mostly just detail the GitHub repos, branches and Jenkins job names. They are managed in the slave module in foreman-infra, in the module itself and under templates/.

After a PR test job completes, the Jenkins jobs are configured to trigger the PR scanner job again, so the PR scanner script can update the status on the GitHub PR as soon as the test results come in, forming a feedback loop.

PR scanner repo hooks

In addition to regular scheduled runs of the PR scanner, hooks are added to the GitHub repositories to kick the PR scanner "build" when a PR is opened or synchronised.

They currently point to a very simple Sinatra app running on OpenShift, which reads the source repo from the hook payload and then triggers the PR scanner build remotely via a POST request to Jenkins. This means PRs begin building within a minute or two of being opened, giving developers faster feedback.
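Triggering a job remotely uses Jenkins' standard remote build API; a minimal sketch (the JENKINS_BASE variable is an assumption, set it to the instance's base URL):

```shell
#!/bin/bash
# Sketch of triggering the PR scanner job remotely; JENKINS_BASE is an
# assumed variable, not part of the real app's configuration.
JENKINS_BASE="${JENKINS_BASE:-http://localhost:8080}"
BUILD_URL="${JENKINS_BASE}/job/pull_request_scanner/build"

# The actual POST (needs network access and build authorisation):
# curl -X POST "${BUILD_URL}"

echo "${BUILD_URL}"
```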

The app is prprocessor, hosted on the project's OpenShift account and the source is available here: (please keep this in sync with OpenShift's repo)

Adding a new generic project

For a project "foo":

  1. create a test_foo job that tests the primary development branch, enable IRC notifications in post build
  2. clone to test_foo_pull_request
    • without IRC notifications
    • add pr_number parameter
    • add initial build step to download the patch and apply
  3. add template to foreman-infra/puppet/modules/slave/templates, update branch, job names etc.
  4. add project to slave::pr_test_config list in foreman-infra/puppet/modules/slave/manifests/init.pp
  5. add pull_request hook to GitHub repo

Foreman plugin testing

Foreman plugins are tested by adding the plugin to a Foreman checkout and running the core tests, which checks that existing behaviours still work and also runs the new plugin's tests. The test_plugin_matrix job copies the core jobs, but adds a plugin from a given git repo/branch, and is used to test plugins in a generic way.

Each plugin should have a job defined in JJB that calls test_plugin_matrix here:
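Such a job might be sketched in JJB along these lines (the parameter names and repository here are hypothetical; check the existing definitions for the real ones):

```yaml
# Hypothetical sketch; parameter names are illustrative only
- job:
    name: test_plugin_foreman_example
    builders:
      - trigger-builds:
          - project: test_plugin_matrix
            predefined-parameters: |
              plugin_git_url=https://github.com/theforeman/foreman_example
              plugin_git_ref=master
            block: true
```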

Foreman plugin PR testing

To test pull requests, a separate job is used that also takes the PR details:

To set this up for a plugin, make the template + manifest addition as noted in the PR section below to configure the PR scanner. Also enable the prprocessor webhook on the GitHub repo for immediate builds.

Adding a new Foreman plugin

For a plugin "foreman_example", first create a job that tests the main (master or develop) branch.

  1. ensure plugin tests run when rake jenkins:unit is called, see the example plugin and testing a plugin for help
  2. create a foreman_example.yaml file in foreman-infra/JJB
  3. ensure the job is green by fixing bugs, installing dependencies etc.

Next, set up PR testing for the plugin:

  1. add template to foreman-infra/puppet/modules/slave/templates, update branch, job names etc.
  2. add project to slave::pr_test_config list in foreman-infra/puppet/modules/slave/manifests/init.pp
  3. add pull_request hook to GitHub repo

System testing with foreman-bats

Some system tests are performed on the complete all-in-one Foreman setup, which includes packages, the installer, the CLI and related components.

These tests are currently in the foreman-bats project and use the BATS test framework (based on bash).

They are intended to be smoke tests only, not in depth testing of any component of the stack. Most components have their own unit tests, which are cheaper to execute and are run closer to where the code is developed, reducing the turnaround time for fixes.
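To give a flavour, a BATS smoke test looks roughly like the following (a hypothetical illustration, not one of foreman-bats' actual tests):

```shell
#!/usr/bin/env bats
# Hypothetical BATS smoke test; not taken from foreman-bats

@test "foreman-installer exits successfully" {
  run foreman-installer --help
  [ "$status" -eq 0 ]
}
```

Each @test block runs a command via `run` and then asserts on its exit status or output, which keeps the tests readable as plain bash.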

systest jobs

The systest_* jobs in Jenkins run the system tests, with systest_foreman being the main parameterised job for executing them.

For RPM-based distributions, the systest_foreman job is run from the packaging_publish_rpm job as an intermediate test phase between the repo being generated and published to the public web site.

For Debian distributions, the systest_foreman_debian job runs nightly against the published repos to report problems. This is a matrix job that runs once for each Debian-based distribution.

Vagrant setup

foreman-bats is very simple - it's a bash script that executes as root on a host, installs Foreman and tests the result. Since we don't want to install Foreman directly on slaves, this is run on the Rackspace public cloud, under a project account.

The Jenkins jobs use Vagrant to create hosts on Rackspace and to run the foreman-bats project on them. Vagrant is installed via the foreman-infra Puppet modules along with the vagrant-rackspace plugin. This launches a standard Rackspace image for the OS under test; Vagrant rsyncs the current directory (the workspace, containing foreman-bats) to it, and the job script then executes the foreman-bats tests over vagrant ssh.
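The Vagrantfile for this looks roughly like the following sketch (values here are illustrative; the real one carries the actual image/flavor selection and credentials):

```ruby
# Hypothetical Vagrantfile sketch for the vagrant-rackspace provider;
# real values and credentials live in the job configuration.
Vagrant.configure('2') do |config|
  config.vm.box = 'dummy'
  config.vm.provider :rackspace do |rs|
    rs.username = ENV['RAX_USERNAME']
    rs.api_key  = ENV['RAX_API_KEY']
    rs.flavor   = /4GB/          # matched against available flavors
    rs.image    = /CentOS 6/     # image for the OS under test
  end
  # the workspace (including foreman-bats) is rsynced onto the instance
  config.vm.synced_folder '.', '/vagrant', type: 'rsync'
end
```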

The same Vagrant setup can be used to run foreman-bats locally via vagrant-libvirt or other plugins.


Configuration management

All slaves are maintained through our own Foreman instance using Puppet. The Foreman instance has host groups called "Base/Builders", "Base/Builders/Red Hat" and "Base/Builders/Debian", which have the "slave" and other classes assigned to them. foreman-infra contains the source for all Puppet modules.

Slave requirements

  • CentOS 6 currently (or 7 once we've tested and updated our Puppet modules)
    • Clean, minimal base installation or the option to reinstall it
  • 1GB of RAM per vCPU (4 vCPU + 4GB RAM is typical)
  • 40GB disk (minimum), SSD preferred
  • ~20GB/month bandwidth
  • Public facing IP address
  • Root access

Configuring a new slave

If using Rackspace, start the new server via the web UI or the rumm CLI utility. Ensure you select:

  • Image: CentOS 6.5
  • Flavor: Performance 1 (4GB)

Set up the data partition for the Jenkins workspace:

  1. mkfs.ext4 -L data /dev/xvde1
  2. echo "LABEL=data /var/lib/workspace ext4 defaults,noatime 1 2" >> /etc/fstab
  3. mkdir /var/lib/workspace && mount /var/lib/workspace


  1. Ensure EPEL is configured: epel-release
  2. Ensure the Puppet Labs repository is configured: puppetlabs-release
  3. yum -y install puppet
  4. echo "server =" >> /etc/puppet/puppet.conf
  5. puppet agent -t
  6. Sign the certificate on the puppetmaster or via Foreman
  7. puppet agent -t
  8. Set the host group to "Base/Builders/Red Hat" in Foreman
  9. Run puppet agent -t twice (second run is important, due to the rvm module behaviour)
  10. Add the node by IP address to Jenkins, copying an existing slave (except slave01)