Using Jenkins Shared Libraries to make better Pipelines

If you haven’t set up a shared library in Jenkins, I highly recommend spending the few minutes it takes to do so. I was on the fence about it for a while, until I found myself writing the same code (well, more like copying and pasting the same code) across the different pipelines I’d written.

One of the benefits of a shared library is that you can keep this common code in GitHub and reference it from your pipelines. Coming from a PowerShell background, this felt very similar to creating a module of functions that I could then reference. Having it stored in GitHub is just icing on the cake (collaboration, source control, versioning, etc.).

Getting Started

I won’t go into detail on how to set up the shared library, as there are plenty of good blog articles out there already (my favorite: https://tomd.xyz/jenkins-shared-library/). The official documentation isn’t bad either (https://jenkins.io/doc/book/pipeline/shared-libraries/).

Once you have your shared library ready to go, you can start writing code to reuse in your pipelines. In general, each function you write should be its own Groovy script, with a .txt file of the same name alongside it. I suggest adding a standard prefix or suffix to your script names in order to prevent overriding existing functions with the same name (for example, a SlackSend.groovy file would override the SlackSend function used by the Slack plugin in Jenkins).
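As a minimal sketch of what one of these functions looks like (mcSayHello is a hypothetical name following the same mc prefix convention used later in this post), the file would live at vars/mcSayHello.groovy in the library repo, with an optional vars/mcSayHello.txt beside it:

```groovy
#!/usr/bin/env groovy

// vars/mcSayHello.groovy
// Callable from any pipeline that loads the library, simply as: mcSayHello('World')
def call(String name = 'World') {
    echo "Hello, ${name}!"
}
```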

What’s the point of the text file?

Although optional, the matching text file provides helper information for those unfamiliar with the code. What’s really cool is that this information then shows up as a global variable reference in Jenkins. Simply add /pipeline-syntax/globals to the end of your pipeline job URL and your function’s helper information will be viewable.

Working Example

Once I had my shared library set up, I wanted to create a function I could reuse in almost all of my pipelines that would also make it easy to illustrate the benefits. I decided to create an mcStartPipeline function that outputs useful information about the build to the user. A lot of this information is ultimately stored in global variables, but having everything in one place on the console saves a lot of searching and clicking (especially for someone new to Jenkins). Eventually I expanded it to optionally send a Slack message to a provided channel indicating that the pipeline was started by a specific user.

mcStartPipeline.groovy

#!/usr/bin/env groovy

def call(String SlackChannel=null) {

    // GET build user (BUILD_USER is only set when a user starts the build;
    // timer/SCM triggers would otherwise throw, so catch and fall back to null)
    BuildUser = null
    try {
        wrap([$class: 'BuildUser']){BuildUser = BUILD_USER}
    } catch (ignored) {
        // Not started by a user; BuildUser stays null so we report SYSTEM below
    }

    // GET last build's result and color it correctly
    def LastResult = currentBuild.previousBuild?.currentResult ?: 'SUCCESS'
    if (LastResult == 'SUCCESS'){
        LastBuild = '\u001b[32;1mSUCCESS\u001B[0m'
    }
    else if(LastResult == 'FAILURE'){
        LastBuild = '\u001b[31;1mFAILURE\u001B[0m'
    }
    else{
        LastBuild = '\u001b[33;1mUNSTABLE\u001B[0m'
    }

    // GET current date/time ('America/New_York' honors DST, unlike the fixed-offset 'EST')
    def now = new Date()
    StartDate = now.format("MM/dd/yy @ HH:mm", TimeZone.getTimeZone('America/New_York'))

    // GET workspace UNC path
    WorkspacePath = "\\\\${env.NODE_NAME}\\${(env.WORKSPACE).replace(':','$')}"

    // Output build info to console
    def BuildInfoString = """
    \u001b[1m | ${currentBuild.projectName} |\u001B[0m
    \u001b[1m_______________________________________________________________\u001B[0m
    \u001b[33;1mName:\u001B[0m \u001b[36;1m${currentBuild.displayName}\u001B[0m
    \u001b[33;1mDescription:\u001B[0m \u001b[36;1m${currentBuild.description}\u001B[0m
    \u001b[33;1mBuild Number:\u001B[0m \u001b[36;1m${currentBuild.number}\u001B[0m
    \u001b[33;1mExecutor:\u001B[0m \u001b[36;1m${BuildUser}\u001B[0m
    \u001b[33;1mLast Build:\u001B[0m ${LastBuild}
    \u001b[33;1mSCM:\u001B[0m \u001b[36;1m${scm.getUserRemoteConfigs()[0].getUrl()}\u001B[0m
    \u001b[33;1mBranch:\u001B[0m \u001b[36;1m${scm.branches[0].name}\u001B[0m
    \u001b[33;1mJenkins Node:\u001B[0m \u001b[36;1m${env.NODE_NAME}\u001B[0m
    \u001b[33;1mWorkspace:\u001B[0m \u001b[36;1m${WorkspacePath}\u001B[0m
    \u001b[33;1mStarted:\u001B[0m \u001b[36;1m${StartDate}\u001B[0m
    \u001b[33;1mEstimated Duration:\u001B[0m \u001b[36;1m${currentBuild.durationString}\u001B[0m
    \u001b[33;1mParameters:\u001B[0m \u001b[36;1m${params}\u001B[0m
    \u001b[1m_______________________________________________________________\u001B[0m
    """
    
    println BuildInfoString

    // Send slack message if slackChannel provided
    if (SlackChannel){
        if(BuildUser){
            SlackResponse = slackSend (channel: SlackChannel, color: '#3366ff', message: "Pipeline Started (_Triggered by ${BuildUser}_)")
        }
        else{
            SlackResponse = slackSend (channel: SlackChannel, color: '#3366ff', message: "Pipeline Started (_Triggered by SYSTEM_)")
        }
        return SlackResponse
    }
}

Why all the escape codes?

These are ANSI color codes that add color to the Jenkins console output (when enabled in the pipeline options; see jsl.pipeline.groovy below). See this guide for more info: http://www.lihaoyi.com/post/BuildyourownCommandLinewithANSIescapecodes.html

mcStartPipeline.txt

Adds basic information about the pipeline to the console and (optionally) sends a Slack message to the indicated channel that the pipeline was started.

jsl.pipeline.groovy (sample pipeline calling mcStartPipeline)

#!/usr/bin/env groovy

// Not needed if the library is configured in Jenkins to load implicitly:
// @Library('mcJSL')_
currentBuild.displayName = "Super Cool Test Pipeline"
currentBuild.description = "This is a test pipeline...I don't know what else to say."

pipeline {

    agent { label 'windows' }

    options {
        //skipDefaultCheckout(true)
        ansiColor('xterm')
    }

// Stages
    stages {
        stage('Startup') {
            steps {
                script{
                    def SlackResponse = mcStartPipeline('jenkins_sandbox')
                    slackSend (channel: SlackResponse.threadId, color: '#3366ff', message: "Responding to the thread...")
                }           
            }
        }
    }
}

Now let’s see it in action when running the pipeline in Jenkins…

The optional Slack message

Conclusion

Although this may be a simple example, hopefully it demonstrates the power of shared libraries and the benefit of being able to reuse code across your pipelines. Eventually I hope to expand our shared library to help standardize all of our pipelines for a consistent and reliable experience.

Using Jenkins Shared Libraries for Jobs

Jenkins Shared Libraries work great with pipelines, but what about using them in standalone jobs? Unfortunately there is no easy built-in way to reference your shared library code from a job configuration, but luckily there is a simple workaround.

Use Case

In a typical enterprise environment, you have many data centers that are isolated from one another by firewall rules. This requires us to run any Jenkins job or pipeline on a Jenkins node specific to that environment. For example, if we had a script that needed to run on a server in a data center located in Detroit, we need to make sure that script is executed from a Jenkins node in Detroit, and not, say, one in Chicago.

To accomplish this, there is the Groovy Label Assignment Plugin. You can build a switch statement around your server naming conventions to dictate where the job will run. A server/environment parameter is defined on the job and passed into this statement to determine the correct Jenkins node to run on. It ends up looking something like this (where ServerName is the parameter and detroit_node and chicago_node are the node labels):
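A sketch of what that Groovy Label Assignment script might look like (the DET-/CHI- naming convention here is hypothetical; adjust the patterns to your own):

```groovy
// Groovy Label Assignment script: map the ServerName parameter to a node label.
// Assumes servers are named with a data-center prefix, e.g. DET-WEB01, CHI-SQL02.
switch (ServerName?.toUpperCase()) {
    case ~/DET-.*/:
        return 'detroit_node'
    case ~/CHI-.*/:
        return 'chicago_node'
    default:
        return null  // fall back to the job's default label
}
```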

Although this works, you’ll soon have the same code configured in every job. Any time a new data center is added, this code would have to be updated in every job.

Introducing Shared Libraries for Jobs

Assuming you have that same switch statement saved in your shared library, how can you call it from the job configuration page? Unlike pipelines, you can’t simply call the function by name. However, you just need to clone your shared library repository somewhere on your master server and wrap it with a couple lines of code. The end result will look like this:

Here, /var/lib/jenkins/JSL/vars/GetNode.Groovy is the path to the script in your shared library repo, which you cloned to the master.
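As a sketch of that wrapper (the path and GetNode function come from the example above; the exact loading mechanism may differ in your setup), the job's Groovy Label Assignment script parses the cloned file and invokes its call() method:

```groovy
// Parse the shared-library script from the local clone on the master,
// then invoke its call(String) method with the job's ServerName parameter
def getNode = new GroovyShell(binding).parse(new File('/var/lib/jenkins/JSL/vars/GetNode.Groovy'))
return getNode.call(ServerName)
```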

To ensure that any changes to the source function are reflected immediately, simply update the Jenkinsfile in your shared library repo to re-clone on any push to master:

#!/usr/bin/env groovy

// Set Branch Name
try{Branch = CHANGE_BRANCH}
catch(e){Branch = BRANCH_NAME}

pipeline {
    
    agent { label '!master' }

// Stages
    stages {
        stage ('Clone JSL') {
            when {
                branch 'master'
            }
            steps {
                // Run job to clone Shared Library repo
                build job: '/sandbox/jsl.clone'
            }
        }
    }
}

A note on DSL

Alternatively, you can use the Job DSL Plugin and keep your entire job configuration stored as code in a GitHub repo. Ultimately that would be the preferred solution, but it comes with its own set of challenges compared to configuring jobs via the UI. In my experience, I usually use the UI for job configuration during new development, then, once the solution is mature, port it to DSL.
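As a rough sketch of the Job DSL alternative (the job name, parameter, and label below are hypothetical), a seed script could define the same kind of job entirely as code:

```groovy
// Job DSL seed script: define the job as code instead of via the UI
job('sandbox/run-remote-script') {
    parameters {
        stringParam('ServerName', '', 'Target server; determines which node the job runs on')
    }
    label('detroit_node') // or wire in the Groovy Label Assignment logic here
    steps {
        shell('echo "Running on $NODE_NAME against $ServerName"')
    }
}
```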