Using Jenkins Shared Libraries for Jobs

Jenkins Shared Libraries work great with pipelines, but what about using them for standalone jobs? Unfortunately there is no built-in way to reference your shared library code from a job configuration, but luckily there is an easy workaround.

Use Case

In a typical enterprise environment, you have many data centers that are isolated from each other by firewall rules. This means any Jenkins job or pipeline must run on a Jenkins node specific to that environment. For example, if a script needs to run on a server in a data center located in Detroit, it must be executed from a Jenkins node in Detroit, and not, say, in Chicago.

To accomplish this, there is the Groovy Label Assignment Plugin. You can build a switch statement around your server naming conventions to dictate where the job will run. A server/environment parameter is defined on the job and passed into this statement to determine the correct Jenkins node to run on. It ends up looking something like this (where ServerName is the parameter and detroit_node and chicago_node are the node labels):

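A minimal sketch of that label-assignment script, assuming the plugin exposes ServerName as a script variable and that server names follow a det-*/chi-* convention (both assumptions for illustration):

// Groovy Label Assignment script (sketch): return the node label for the requested server.
// The ServerName binding and the det-/chi- naming prefixes are assumptions.
switch (ServerName) {
    case ~/det.*/:
        return 'detroit_node'
    case ~/chi.*/:
        return 'chicago_node'
    default:
        return null  // no match: fall back to the job's default node restriction
}
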
Although this works, you’ll soon have the same code configured for every job, and any time a new data center is added, this code has to be updated in every one of them.

Introducing Shared Libraries for Jobs

Assuming you have that same case statement saved in your shared library, how can you call it from the job configuration page? Unlike pipelines, you can’t simply call the function by name. However, you just need to clone your Shared Library repository somewhere on your master server and wrap it in a couple of lines of code. The end result will look like this:

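A minimal sketch of that wrapper, assuming GetNode.Groovy defines the usual call(serverName) method of a shared-library step and that ServerName is available to the script as a job parameter:

// Groovy Label Assignment script in the job configuration (sketch).
// Loads the cloned shared-library step from disk and delegates to it.
// Assumes GetNode.Groovy defines call(serverName) and ServerName is bound as a parameter.
def getNode = new GroovyShell().parse(new File('/var/lib/jenkins/JSL/vars/GetNode.Groovy'))
return getNode.call(ServerName)
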
Here, /var/lib/jenkins/JSL/vars/GetNode.Groovy is the path to the script in the Shared Library repo that you cloned to the master.

To ensure that any changes to the source function are reflected immediately, update the Jenkinsfile in your shared library repo to re-clone on any push to master:

#!/usr/bin/env groovy

// Resolve the branch name (CHANGE_BRANCH is only set for pull requests)
try { Branch = CHANGE_BRANCH }
catch (e) { Branch = BRANCH_NAME }

pipeline {

    agent { label '!master' }

    stages {
        stage('Clone JSL') {
            when {
                branch 'master'
            }
            steps {
                // Run job to clone the Shared Library repo onto the master
                build job: '/sandbox/jsl.clone'
            }
        }
    }
}
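
The /sandbox/jsl.clone job referenced above can itself be a small pipeline pinned to the master that refreshes the checkout; a sketch, with the repository URL as a placeholder:

// Hypothetical /sandbox/jsl.clone pipeline (sketch): refresh the shared-library
// checkout on the master so the job wrapper always sees the latest code.
pipeline {
    agent { label 'master' }
    stages {
        stage('Refresh JSL') {
            steps {
                dir('/var/lib/jenkins/JSL') {
                    // Repository URL is a placeholder
                    git url: 'https://github.com/your-org/jenkins-shared-library.git', branch: 'master'
                }
            }
        }
    }
}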

A note on DSL

Alternatively, you can use the Job DSL Plugin and keep your entire job configuration stored as code in a GitHub repo. Ultimately that would be the preferred solution, but it comes with its own set of challenges beyond configuring jobs via the UI. In my experience, I usually use the UI for job configuration during new development, then once the solution is mature, port it to DSL.
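
For reference, a bare-bones Job DSL sketch of such a job; the job name, parameter description, and shell step are illustrative, and the Groovy Label Assignment settings would still need a configure block (omitted here):

// Hypothetical Job DSL seed script (sketch): the same job defined as code.
// Job name, script, and descriptions are placeholders; the Groovy Label
// Assignment plugin settings would be added via a configure {} block.
job('datacenter-maintenance') {
    parameters {
        stringParam('ServerName', '', 'Target server; its name determines which node runs the job')
    }
    steps {
        shell('./run_maintenance.sh "$ServerName"')
    }
}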