What is Jenkins?

Jenkins is a self-contained, open source automation server which can be used to automate all sorts of tasks related to building, testing, and delivering or deploying software.

Jenkins can be installed through native system packages, Docker, or even run standalone by any machine with a Java Runtime Environment (JRE) installed.
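For example, a quick way to try Jenkins is with the official jenkins/jenkins Docker image; the ports and volume name below are the conventional defaults, so adjust them for your environment:

docker run -d --name jenkins \
  -p 8080:8080 -p 50000:50000 \
  -v jenkins_home:/var/jenkins_home \
  jenkins/jenkins:lts

Port 8080 serves the web UI, and port 50000 is used by inbound agents.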
Automate job definition
Jenkins has the ability to automatically create, update, and delete jobs based on the repositories it identifies in your software configuration management system. Leverage this feature by structuring your job definitions to get the most out of Jenkins' automatic job management capabilities.

There are multiple alternatives for automatic job management, including:

Use organization folders - create, update, and delete multibranch Pipeline folders and Pipeline jobs automatically (preferred)

Use multibranch Pipelines - create, update, and delete Pipeline jobs automatically

Use Pipeline - manually define Pipeline jobs for more control over the job management process
Working with projects
Jenkins uses projects (also known as "jobs") to perform its work. Projects are defined and run by Jenkins users. Jenkins offers several different types of projects, including:

Pipeline

Multibranch Pipeline

Organization folders

Freestyle

Multi-configuration (matrix)

Maven

External job

What is Jenkins Pipeline?
Jenkins Pipeline (or simply "Pipeline" with a capital "P") is a suite of plugins which supports implementing and integrating continuous delivery pipelines into Jenkins.

A continuous delivery (CD) pipeline is an automated expression of your process for getting software from version control right through to your users and customers. Every change to your software (committed in source control) goes through a complex process on its way to being released. This process involves building the software in a reliable and repeatable manner, as well as progressing the built software (called a "build") through multiple stages of testing and deployment.

Pipeline provides an extensible set of tools for modeling simple-to-complex delivery pipelines "as code" via the Pipeline domain-specific language (DSL) syntax. [1]

The definition of a Jenkins Pipeline is written into a text file (called a Jenkinsfile) which in turn can be committed to a project’s source control repository. [2] This is the foundation of "Pipeline-as-code"; treating the CD pipeline as a part of the application to be versioned and reviewed like any other code.

Creating a Jenkinsfile and committing it to source control provides a number of immediate benefits:

Automatically creates a Pipeline build process for all branches and pull requests.

Code review/iteration on the Pipeline (along with the remaining source code).

Audit trail for the Pipeline.

Single source of truth [3] for the Pipeline, which can be viewed and edited by multiple members of the project.

While the syntax for defining a Pipeline, either in the web UI or with a Jenkinsfile, is the same, it is generally considered best practice to define the Pipeline in a Jenkinsfile and check that in to source control.

Declarative versus Scripted Pipeline syntax
A Jenkinsfile can be written using two types of syntax — Declarative and Scripted.

Declarative and Scripted Pipelines are constructed fundamentally differently. Declarative Pipeline is designed to make writing and reading Pipeline code easier, and provides richer syntactical features over Scripted Pipeline syntax.

Many of the individual syntactical components (or "steps") written into a Jenkinsfile, however, are common to both Declarative and Scripted Pipeline. Read more about how these two types of syntax differ in Pipeline concepts and Pipeline syntax overview below.

Why Pipeline?
Jenkins is, fundamentally, an automation engine which supports a number of automation patterns. Pipeline adds a powerful set of automation tools onto Jenkins, supporting use cases that span from simple continuous integration to comprehensive CD pipelines. By modeling a series of related tasks, users can take advantage of the many features of Pipeline:

Code: Pipelines are implemented in code and typically checked into source control, giving teams the ability to edit, review, and iterate upon their delivery pipeline.

Durable: Pipelines can survive both planned and unplanned restarts of the Jenkins controller.

Pausable: Pipelines can optionally stop and wait for human input or approval before continuing the Pipeline run.

Versatile: Pipelines support complex real-world CD requirements, including the ability to fork/join, loop, and perform work in parallel.

Extensible: The Pipeline plugin supports custom extensions to its DSL [1] and multiple options for integration with other plugins.

While Jenkins has always allowed rudimentary forms of chaining Freestyle Jobs together to perform sequential tasks, [4] Pipeline makes this concept a first-class citizen in Jenkins.

Pipeline concepts
The following concepts are key aspects of Jenkins Pipeline, which tie in closely to Pipeline syntax (refer to the overview below).

Pipeline
A Pipeline is a user-defined model of a CD pipeline. A Pipeline’s code defines your entire build process, which typically includes stages for building an application, testing it and then delivering it.

Also, a pipeline block is a key part of Declarative Pipeline syntax.

Node
A node is a machine which is part of the Jenkins environment and is capable of executing a Pipeline.

Also, a node block is a key part of Scripted Pipeline syntax.

Stage
A stage block defines a conceptually distinct subset of tasks performed through the entire Pipeline (e.g. "Build", "Test" and "Deploy" stages), which is used by many plugins to visualize or present Jenkins Pipeline status/progress.

Step
A single task. Fundamentally, a step tells Jenkins what to do at a particular point in time (or "step" in the process). For example, to execute the shell command make, use the sh step: sh 'make'. When a plugin extends the Pipeline DSL, [1] that typically means the plugin has implemented a new step.

Pipeline syntax overview
The following Pipeline code skeletons illustrate the fundamental differences between Declarative Pipeline syntax and Scripted Pipeline syntax.

Be aware that stages and steps (described above) are common elements of both Declarative and Scripted Pipeline syntax.

Declarative Pipeline fundamentals
In Declarative Pipeline syntax, the pipeline block defines all the work done throughout your entire Pipeline.

Jenkinsfile (Declarative Pipeline)
pipeline {
    agent any
    stages {
        stage('Build') {
            steps {
                //
            }
        }
        stage('Test') {
            steps {
                //
            }
        }
        stage('Deploy') {
            steps {
                //
            }
        }
    }
}
Execute this Pipeline or any of its stages, on any available agent.
Defines the "Build" stage.
Perform some steps related to the "Build" stage.
Defines the "Test" stage.
Perform some steps related to the "Test" stage.
Defines the "Deploy" stage.
Perform some steps related to the "Deploy" stage.
Scripted Pipeline fundamentals
In Scripted Pipeline syntax, one or more node blocks do the core work throughout the entire Pipeline. Although this is not a mandatory requirement of Scripted Pipeline syntax, confining your Pipeline’s work inside of a node block does two things:

Schedules the steps contained within the block to run by adding an item to the Jenkins queue. As soon as an executor is free on a node, the steps will run.

Creates a workspace (a directory specific to that particular Pipeline) where work can be done on files checked out from source control.
Caution: Depending on your Jenkins configuration, some workspaces may not get automatically cleaned up after a period of inactivity. Refer to the tickets and discussion linked from JENKINS-2111 for more information.
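If leftover workspaces are a concern, cleanup can also be done explicitly at the end of a run. Here is a minimal Scripted sketch using the built-in deleteDir step (the cleanWs step from the Workspace Cleanup plugin is a common alternative):

node {
    try {
        checkout scm
        sh 'make'
    }
    finally {
        // Delete the workspace contents once the run is finished
        deleteDir()
    }
}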

Jenkinsfile (Scripted Pipeline)
node {
    stage('Build') {
        //
    }
    stage('Test') {
        //
    }
    stage('Deploy') {
        //
    }
}
Execute this Pipeline or any of its stages, on any available agent.
Defines the "Build" stage. stage blocks are optional in Scripted Pipeline syntax. However, implementing stage blocks in a Scripted Pipeline provides clearer visualization of each stage's subset of tasks/steps in the Jenkins UI.
Perform some steps related to the "Build" stage.
Defines the "Test" stage.
Perform some steps related to the "Test" stage.
Defines the "Deploy" stage.
Perform some steps related to the "Deploy" stage.
Pipeline example
Here is an example of a Jenkinsfile using Declarative Pipeline syntax:

Jenkinsfile (Declarative Pipeline)
pipeline {
    agent any
    options {
        skipStagesAfterUnstable()
    }
    stages {
        stage('Build') {
            steps {
                sh 'make'
            }
        }
        stage('Test') {
            steps {
                sh 'make check'
                junit 'reports/**/*.xml'
            }
        }
        stage('Deploy') {
            steps {
                sh 'make publish'
            }
        }
    }
}
pipeline is Declarative Pipeline-specific syntax that defines a "block" containing all content and instructions for executing the entire Pipeline.
agent is Declarative Pipeline-specific syntax that instructs Jenkins to allocate an executor (on a node) and workspace for the entire Pipeline.
stage is a syntax block that describes a stage of this Pipeline. Read more about stage blocks in Declarative Pipeline syntax on the Pipeline syntax page. As mentioned above, stage blocks are optional in Scripted Pipeline syntax.
steps is Declarative Pipeline-specific syntax that describes the steps to be run in this stage.
sh is a Pipeline step (provided by the Pipeline: Nodes and Processes plugin) that executes the given shell command.
junit is another Pipeline step (provided by the JUnit plugin) for aggregating test reports.
Read more about Pipeline syntax on the Pipeline Syntax page.

  1. Domain-specific language
  2. Source control management
  3. Single source of truth
  4. Additional plugins have been used to implement complex behaviors utilizing Freestyle Jobs such as the Copy Artifact, Parameterized Trigger, and Promoted Builds plugins.

Creating a Jenkinsfile
As discussed in the Defining a Pipeline in SCM section, a Jenkinsfile is a text file that contains the definition of a Jenkins Pipeline and is checked into source control. Consider the following Pipeline which implements a basic three-stage continuous delivery pipeline.

Jenkinsfile (Declarative Pipeline)
pipeline {
    agent any

    stages {
        stage('Build') {
            steps {
                echo 'Building..'
            }
        }
        stage('Test') {
            steps {
                echo 'Testing..'
            }
        }
        stage('Deploy') {
            steps {
                echo 'Deploying....'
            }
        }
    }
}
Not all Pipelines will have these same three stages, but it is a good starting point to define them for most projects. The sections below will demonstrate the creation and execution of a simple Pipeline in a test installation of Jenkins.

It is assumed that there is already a source control repository set up for the project and a Pipeline has been defined in Jenkins following these instructions.

Using a text editor, ideally one which supports Groovy syntax highlighting, create a new Jenkinsfile in the root directory of the project.

The Declarative Pipeline example above contains the minimum necessary structure to implement a continuous delivery pipeline. The agent directive, which is required, instructs Jenkins to allocate an executor and workspace for the Pipeline. Without an agent directive, not only is the Declarative Pipeline not valid, it would not be capable of doing any work! By default the agent directive ensures that the source repository is checked out and made available for steps in the subsequent stages.

The stages and steps directives are also required for a valid Declarative Pipeline, as they instruct Jenkins what to execute and in which stage it should be executed.

For more advanced usage with Scripted Pipeline, node is a crucial first step, as it allocates an executor and workspace for the Pipeline. In essence, without node, a Pipeline cannot do any work! From within node, the first order of business will be to check out the source code for this project. Since the Jenkinsfile is being pulled directly from source control, Pipeline provides a quick and easy way to access the right revision of the source code.

Jenkinsfile (Scripted Pipeline)
node {
    checkout scm
    /* .. snip .. */
}
The checkout step will check out code from source control; scm is a special variable which instructs the checkout step to clone the specific revision which triggered this Pipeline run.
Build
For many projects the beginning of "work" in the Pipeline would be the "build" stage. Typically this stage of the Pipeline will be where source code is assembled, compiled, or packaged. The Jenkinsfile is not a replacement for an existing build tool such as GNU/Make, Maven, Gradle, or others, but rather can be viewed as a glue layer to bind the multiple phases of a project’s development lifecycle (build, test, deploy) together.

Jenkins has a number of plugins for invoking practically any build tool in general use, but this example will simply invoke make from a shell step (sh). The sh step assumes the system is Unix/Linux-based; for Windows-based systems, the bat step could be used instead.

Jenkinsfile (Declarative Pipeline)
pipeline {
    agent any

    stages {
        stage('Build') {
            steps {
                sh 'make'
                archiveArtifacts artifacts: '**/target/*.jar', fingerprint: true
            }
        }
    }
}
The sh step invokes the make command and will only continue if a zero exit code is returned by the command. Any non-zero exit code will fail the Pipeline.
archiveArtifacts captures the files built matching the include pattern (**/target/*.jar) and saves them to the Jenkins controller for later retrieval.
Archiving artifacts is not a substitute for using external artifact repositories such as Artifactory or Nexus and should be considered only for basic reporting and file archival.

Test
Running automated tests is a crucial component of any successful continuous delivery process. As such, Jenkins has a number of test recording, reporting, and visualization facilities provided by a number of plugins. At a fundamental level, when there are test failures, it is useful to have Jenkins record the failures for reporting and visualization in the web UI. The example below uses the junit step, provided by the JUnit plugin.

In the example below, if tests fail, the Pipeline is marked "unstable", as denoted by a yellow ball in the web UI. Based on the recorded test reports, Jenkins can also provide historical trend analysis and visualization.

Jenkinsfile (Declarative Pipeline)
pipeline {
    agent any

    stages {
        stage('Test') {
            steps {
                /* `make check` returns non-zero on test failures,
                 * using `true` to allow the Pipeline to continue nonetheless
                 */
                sh 'make check || true'
                junit '**/target/*.xml'
            }
        }
    }
}
Using an inline shell conditional (sh 'make check || true') ensures that the sh step always sees a zero exit code, giving the junit step the opportunity to capture and process the test reports. Alternative approaches to this are covered in more detail in the Handling failure section below.
junit captures and associates the JUnit XML files matching the inclusion pattern (**/target/*.xml).
Deploy
Deployment can imply a variety of steps, depending on the project or organization requirements, and may be anything from publishing built artifacts to an Artifactory server, to pushing code to a production system.

At this stage of the example Pipeline, both the "Build" and "Test" stages have successfully executed. In essence, the "Deploy" stage will only execute assuming previous stages completed successfully, otherwise the Pipeline would have exited early.

Jenkinsfile (Declarative Pipeline)
pipeline {
    agent any

    stages {
        stage('Deploy') {
            when {
                expression {
                    currentBuild.result == null || currentBuild.result == 'SUCCESS'
                }
            }
            steps {
                sh 'make publish'
            }
        }
    }
}
Accessing the currentBuild.result variable allows the Pipeline to determine if there were any test failures, in which case the value would be UNSTABLE.
Assuming everything has executed successfully in the example Jenkins Pipeline, each successful Pipeline run will have associated build artifacts archived, test results reported upon and the full console output all in Jenkins.

A Scripted Pipeline can include conditional tests, loops, try/catch/finally blocks, and even functions. The next section will cover this advanced Scripted Pipeline syntax in more detail.

Working with your Jenkinsfile
The following sections provide details about handling:

specific Pipeline syntax in your Jenkinsfile and

features and functionality of Pipeline syntax which are essential in building your application or Pipeline project.

Using environment variables
Jenkins Pipeline exposes environment variables via the global variable env, which is available from anywhere within a Jenkinsfile. The full list of environment variables accessible from within Jenkins Pipeline is documented at ${YOUR_JENKINS_URL}/pipeline-syntax/globals#env and includes:

BUILD_ID
The current build ID, identical to BUILD_NUMBER for builds created in Jenkins versions 1.597+.

BUILD_NUMBER
The current build number, such as "153".

BUILD_TAG
String of jenkins-${JOB_NAME}-${BUILD_NUMBER}. Convenient to put into a resource file, a jar file, etc. for easier identification.

BUILD_URL
The URL where the results of this build can be found (for example, http://buildserver/jenkins/job/MyJobName/17/).

EXECUTOR_NUMBER
The unique number that identifies the current executor (among executors of the same machine) performing this build. This is the number you see in the "build executor status", except that the number starts from 0, not 1.

JAVA_HOME
If your job is configured to use a specific JDK, this variable is set to the JAVA_HOME of the specified JDK. When this variable is set, PATH is also updated to include the bin subdirectory of JAVA_HOME.

JENKINS_URL
Full URL of Jenkins, such as https://example.com:port/jenkins/ (NOTE: only available if Jenkins URL set in "System Configuration").

JOB_NAME
Name of the project of this build, such as "foo" or "foo/bar".

NODE_NAME
The name of the node the current build is running on. Set to 'master' for the Jenkins controller.

WORKSPACE
The absolute path of the workspace.

Referencing or using these environment variables can be accomplished like accessing any key in a Groovy Map, for example:

Jenkinsfile (Declarative Pipeline)
pipeline {
    agent any
    stages {
        stage('Example') {
            steps {
                echo "Running ${env.BUILD_ID} on ${env.JENKINS_URL}"
            }
        }
    }
}
Setting environment variables
Setting an environment variable within a Jenkins Pipeline is accomplished differently depending on whether Declarative or Scripted Pipeline is used.

Declarative Pipeline supports an environment directive, whereas users of Scripted Pipeline must use the withEnv step.

Jenkinsfile (Declarative Pipeline)
pipeline {
    agent any
    environment {
        CC = 'clang'
    }
    stages {
        stage('Example') {
            environment {
                DEBUG_FLAGS = '-g'
            }
            steps {
                sh 'printenv'
            }
        }
    }
}
An environment directive used in the top-level pipeline block will apply to all steps within the Pipeline.
An environment directive defined within a stage will only apply the given environment variables to steps within the stage.
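For comparison, a minimal Scripted Pipeline sketch of the same idea using withEnv (the variable values match the Declarative example above):

Jenkinsfile (Scripted Pipeline)
node {
    // withEnv scopes these overrides to the enclosed block only
    withEnv(['CC=clang', 'DEBUG_FLAGS=-g']) {
        sh 'printenv'
    }
}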
Setting environment variables dynamically
Environment variables can be set at run time and can be used by shell scripts (sh), Windows batch scripts (bat), and PowerShell scripts (powershell). Each script can return either its exit status (returnStatus) or its standard output (returnStdout).

Below is an example in a Declarative Pipeline using sh (shell) with both returnStatus and returnStdout.

Jenkinsfile (Declarative Pipeline)
pipeline {
    agent any
    environment {
        // Using returnStdout
        CC = """${sh(
                returnStdout: true,
                script: 'echo "clang"'
            )}"""
        // Using returnStatus
        EXIT_STATUS = """${sh(
                returnStatus: true,
                script: 'exit 1'
            )}"""
    }
    stages {
        stage('Example') {
            environment {
                DEBUG_FLAGS = '-g'
            }
            steps {
                sh 'printenv'
            }
        }
    }
}
An agent must be set at the top level of the pipeline. This will fail if agent is set as agent none.
When using returnStdout, trailing whitespace is appended to the returned string. Use .trim() to remove it.
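For example, a minimal sketch of capturing trimmed output in the environment directive (the variable name and command here are illustrative):

environment {
    // .trim() strips the trailing newline that returnStdout leaves behind
    GIT_COMMIT_SHORT = """${sh(returnStdout: true, script: 'git rev-parse --short HEAD')}""".trim()
}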
Handling credentials
Credentials configured in Jenkins can be handled in Pipelines for immediate use. Read more about using credentials in Jenkins on the Using credentials page.
For secret text, usernames and passwords, and secret files
Jenkins' Declarative Pipeline syntax has the credentials() helper method (used within the environment directive) which supports secret text, username and password, as well as secret file credentials. If you want to handle other types of credentials, refer to the For other credential types section.

Secret text
The following Pipeline code shows an example of how to create a Pipeline using environment variables for secret text credentials.

In this example, two secret text credentials are assigned to separate environment variables to access Amazon Web Services (AWS). These credentials would have been configured in Jenkins with their respective credential IDs jenkins-aws-secret-key-id and jenkins-aws-secret-access-key.

Jenkinsfile (Declarative Pipeline)
pipeline {
    agent {
        // Define agent details here
    }
    environment {
        AWS_ACCESS_KEY_ID = credentials('jenkins-aws-secret-key-id')
        AWS_SECRET_ACCESS_KEY = credentials('jenkins-aws-secret-access-key')
    }
    stages {
        stage('Example stage 1') {
            steps {
                //
            }
        }
        stage('Example stage 2') {
            steps {
                //
            }
        }
    }
}
You can reference the two credential environment variables (defined in this Pipeline’s environment directive) within this stage’s steps using the syntax $AWS_ACCESS_KEY_ID and $AWS_SECRET_ACCESS_KEY. For example, here you can authenticate to AWS using the secret text credentials assigned to these credential variables.

To maintain the security and anonymity of these credentials, if the job displays the value of these credential variables from within the Pipeline (such as echo $AWS_SECRET_ACCESS_KEY), Jenkins only returns the value “****” to reduce the risk of secret information being disclosed to the console output and any logs. Any sensitive information in credential IDs themselves (such as usernames) is also returned as “****” in the Pipeline run’s output.

This only reduces the risk of accidental exposure. It does not prevent a malicious user from capturing the credential value by other means. A Pipeline that uses credentials can also disclose those credentials. Don’t allow untrusted Pipeline jobs to use trusted credentials.
In this Pipeline example, the credentials assigned to the two AWS_…​ environment variables are scoped globally for the entire Pipeline, so these credential variables could also be used in this stage’s steps. If, however, the environment directive in this Pipeline were moved to a specific stage (as is the case in the Usernames and passwords Pipeline example below), then these AWS_…​ environment variables would only be scoped to the steps in that stage.
Storing static AWS keys in Jenkins credentials is not very secure. If you can run Jenkins itself in AWS (at least the agent), it is preferable to use IAM roles for a computer or EKS service account. It is also possible to use web identity federation.
Usernames and passwords
The following Pipeline code snippets show an example of how to create a Pipeline using environment variables for username and password credentials.

In this example, username and password credentials are assigned to environment variables to access a Bitbucket repository in a common account or team for your organization; these credentials would have been configured in Jenkins with the credential ID jenkins-bitbucket-common-creds.

When setting the credential environment variable in the environment directive:

environment {
    BITBUCKET_COMMON_CREDS = credentials('jenkins-bitbucket-common-creds')
}
this actually sets the following three environment variables:

BITBUCKET_COMMON_CREDS - contains a username and a password separated by a colon in the format username:password.

BITBUCKET_COMMON_CREDS_USR - an additional variable containing the username component only.

BITBUCKET_COMMON_CREDS_PSW - an additional variable containing the password component only.

By convention, variable names for environment variables are typically specified in capital case, with individual words separated by underscores. You can, however, specify any legitimate variable name using lower case characters. Bear in mind that the additional environment variables created by the credentials() method (above) will always be appended with _USR and _PSW (i.e. in the format of an underscore followed by three capital letters).

The following code snippet shows the example Pipeline in its entirety:

Jenkinsfile (Declarative Pipeline)
pipeline {
    agent {
        // Define agent details here
    }
    stages {
        stage('Example stage 1') {
            environment {
                BITBUCKET_COMMON_CREDS = credentials('jenkins-bitbucket-common-creds')
            }
            steps {
                //
            }
        }
        stage('Example stage 2') {
            steps {
                //
            }
        }
    }
}
The following credential environment variables (defined in this Pipeline’s environment directive) are available within this stage’s steps and can be referenced using the syntax:
$BITBUCKET_COMMON_CREDS

$BITBUCKET_COMMON_CREDS_USR

$BITBUCKET_COMMON_CREDS_PSW

For example, here you can authenticate to Bitbucket with the username and password assigned to these credential variables. To maintain the security and anonymity of these credentials, if the job displays the value of these credential variables from within the Pipeline, the same behavior described in the Secret text example above applies to these username and password credential variable types too. This only reduces the risk of accidental exposure. It does not prevent a malicious user from capturing the credential value by other means. A Pipeline that uses credentials can also disclose those credentials. Don’t allow untrusted Pipeline jobs to use trusted credentials.

In this Pipeline example, the credentials assigned to the three BITBUCKET_COMMON_CREDS…​ environment variables are scoped only to Example stage 1, so these credential variables are not available for use in this Example stage 2 stage’s steps. If, however, the environment directive in this Pipeline were moved immediately within the pipeline block (as is the case in the Secret text Pipeline example above), then these BITBUCKET_COMMON_CREDS…​ environment variables would be scoped globally and could be used in any stage’s steps.
Secret files
A secret file is a credential which is stored in a file and uploaded to Jenkins. Secret files are used for credentials that are:

too unwieldy to enter directly into Jenkins, and/or

in binary format, such as a GPG file.

In this example, we use a Kubernetes config file that has been configured as a secret file credential named my-kubeconfig.

Jenkinsfile (Declarative Pipeline)
pipeline {
    agent {
        // Define agent details here
    }
    environment {
        // The MY_KUBECONFIG environment variable will be assigned the value of a temporary file.
        // For example:
        // /home/user/.jenkins/workspace/cred_test@tmp/secretFiles/546a5cf3-9b56-4165-a0fd-19e2afe6b31f/kubeconfig.txt
        MY_KUBECONFIG = credentials('my-kubeconfig')
    }
    stages {
        stage('Example stage 1') {
            steps {
                sh("kubectl --kubeconfig $MY_KUBECONFIG get pods")
            }
        }
    }
}
For other credential types
If you need to handle credentials in a Pipeline other than secret text, usernames and passwords, or secret files (SSH keys or certificates, for example), use Jenkins' Snippet Generator feature, which you can access through Jenkins' classic UI.

To access the Snippet Generator for your Pipeline project/item:

From the Jenkins Dashboard, select the name of your Pipeline project/item.

In the left navigation pane, select Pipeline Syntax and ensure that the Snippet Generator option is available at the top of the navigation pane.

From the Sample Step field, choose withCredentials: Bind credentials to variables.

Under Bindings, click Add and choose from the dropdown:

SSH User Private Key - to handle SSH public/private key pair credentials, from which you can specify:

Key File Variable - the name of the environment variable that will be bound to these credentials. Jenkins actually assigns this temporary variable to the secure location of the private key file required in the SSH public/private key pair authentication process.

Passphrase Variable (Optional) - the name of the environment variable that will be bound to the passphrase associated with the SSH public/private key pair.

Username Variable (Optional) - the name of the environment variable that will be bound to the username associated with the SSH public/private key pair.

Credentials - choose the SSH public/private key credentials stored in Jenkins. The value of this field is the credential ID, which Jenkins writes out to the generated snippet.

Certificate - to handle PKCS#12 certificates, from which you can specify:

Keystore Variable - the name of the environment variable that will be bound to these credentials. Jenkins actually assigns this temporary variable to the secure location of the certificate’s keystore required in the certificate authentication process.

Password Variable (Optional) - the name of the environment variable that will be bound to the password associated with the certificate.

Alias Variable (Optional) - the name of the environment variable that will be bound to the unique alias associated with the certificate.

Credentials - choose the certificate credentials stored in Jenkins. The value of this field is the credential ID, which Jenkins writes out to the generated snippet.

Docker client certificate - to handle Docker Host Certificate Authentication.

Click Generate Pipeline Script and Jenkins generates a withCredentials(…​) { …​ } Pipeline step snippet for the credentials you specified, which you can then copy and paste into your Declarative or Scripted Pipeline code.
Notes:

The Credentials fields (above) show the names of credentials configured in Jenkins. However, these values are converted to credential IDs after clicking Generate Pipeline Script.

To combine more than one credential in a single withCredentials(…​) { …​ } Pipeline step, see Combining credentials in one step (below) for details.

SSH User Private Key example

withCredentials(bindings: [sshUserPrivateKey(credentialsId: 'jenkins-ssh-key-for-abc', \
                                             keyFileVariable: 'SSH_KEY_FOR_ABC', \
                                             passphraseVariable: '', \
                                             usernameVariable: '')]) {
    // some block
}
The optional passphraseVariable and usernameVariable definitions can be deleted in your final Pipeline code.

Certificate example

withCredentials(bindings: [certificate(aliasVariable: '', \
                                       credentialsId: 'jenkins-certificate-for-xyz', \
                                       keystoreVariable: 'CERTIFICATE_FOR_XYZ', \
                                       passwordVariable: 'XYZ-CERTIFICATE-PASSWORD')]) {
    // some block
}
The optional aliasVariable and passwordVariable definitions can be deleted in your final Pipeline code.

The following code snippet shows an example Pipeline in its entirety, which implements the SSH User Private Key and Certificate snippets above:

Jenkinsfile (Declarative Pipeline)
pipeline {
    agent {
        // define agent details
    }
    stages {
        stage('Example stage 1') {
            steps {
                withCredentials(bindings: [sshUserPrivateKey(credentialsId: 'jenkins-ssh-key-for-abc', \
                                                             keyFileVariable: 'SSH_KEY_FOR_ABC')]) {
                    //
                }
                withCredentials(bindings: [certificate(credentialsId: 'jenkins-certificate-for-xyz', \
                                                       keystoreVariable: 'CERTIFICATE_FOR_XYZ', \
                                                       passwordVariable: 'XYZ-CERTIFICATE-PASSWORD')]) {
                    //
                }
            }
        }
        stage('Example stage 2') {
            steps {
                //
            }
        }
    }
}
Within this step, you can reference the credential environment variable with the syntax $SSH_KEY_FOR_ABC. For example, here you can authenticate to the ABC application with its configured SSH public/private key pair credentials, whose SSH User Private Key file is assigned to $SSH_KEY_FOR_ABC.
Within this step, you can reference the credential environment variable with the syntax $CERTIFICATE_FOR_XYZ and $XYZ-CERTIFICATE-PASSWORD. For example, here you can authenticate to the XYZ application with its configured certificate credentials, whose Certificate's keystore file and password are assigned to the variables $CERTIFICATE_FOR_XYZ and $XYZ-CERTIFICATE-PASSWORD, respectively.
In this Pipeline example, the credentials assigned to the $SSH_KEY_FOR_ABC, $CERTIFICATE_FOR_XYZ and $XYZ-CERTIFICATE-PASSWORD environment variables are scoped only within their respective withCredentials( …​ ) { …​ } steps, so these credential variables are not available for use in this Example stage 2 stage’s steps.
To maintain the security and anonymity of these credentials, if you attempt to retrieve the value of these credential variables from within these withCredentials( …​ ) { …​ } steps, the same behavior described in the Secret text example (above) applies to these SSH public/private key pair credential and certificate variable types too. This only reduces the risk of accidental exposure. It does not prevent a malicious user from capturing the credential value by other means. A Pipeline that uses credentials can also disclose those credentials. Don’t allow untrusted Pipeline jobs to use trusted credentials.

When using the Sample Step field’s withCredentials: Bind credentials to variables option in the Snippet Generator, only credentials which your current Pipeline project/item has access to can be selected from any Credentials field’s list. While you can manually write a withCredentials( …​ ) { …​ } step for your Pipeline (like the examples above), using the Snippet Generator is recommended to avoid specifying credentials that are out of scope for this Pipeline project/item, which when run, will make the step fail.

You can also use the Snippet Generator to generate withCredentials( …​ ) { …​ } steps to handle secret text, usernames and passwords and secret files. However, if you only need to handle these types of credentials, it is recommended you use the relevant procedure described in the section above for improved Pipeline code readability.

Note the use of single-quotes instead of double-quotes to define the script (the implicit parameter to sh) in Groovy below. The single-quotes will cause the secret to be expanded by the shell as an environment variable. The double-quotes are potentially less secure, as the secret is interpolated by Groovy, and so typical operating system process listings will accidentally disclose it:

node {
    withCredentials([string(credentialsId: 'mytoken', variable: 'TOKEN')]) {
        sh /* WRONG! */ """
            set +x
            curl -H 'Token: $TOKEN' https://some.api/
        """
        sh /* CORRECT */ '''
            set +x
            curl -H 'Token: $TOKEN' https://some.api/
        '''
    }
}
Combining credentials in one step
Using the Snippet Generator, you can make multiple credentials available within a single withCredentials( …​ ) { …​ } step by doing the following:

From the Jenkins Dashboard, select the name of your Pipeline project/item.

In the left navigation pane, select Pipeline Syntax and ensure that the Snippet Generator option is available at the top of the navigation pane.

From the Sample Step field, choose withCredentials: Bind credentials to variables.

Click Add under Bindings.

Choose the credential type to add to the withCredentials( …​ ) { …​ } step from the dropdown list.

Specify the credential Bindings details. Read more about these in the procedure under For other credential types (above).

Repeat from "Click Add …​" (above) for each set of credentials to add to the withCredentials( …​ ) { …​ } step.

Select Generate Pipeline Script to generate the final withCredentials( …​ ) { …​ } step snippet.

String interpolation
Jenkins Pipeline uses rules identical to Groovy for string interpolation. Groovy’s String interpolation support can be confusing to many newcomers to the language. While Groovy supports declaring a string with either single quotes or double quotes, for example:

def singlyQuoted = 'Hello'
def doublyQuoted = "World"
Only the latter string will support the dollar-sign ($) based string interpolation, for example:

def username = 'Jenkins'
echo 'Hello Mr. ${username}'
echo "I said, Hello Mr. ${username}"
Would result in:

Hello Mr. ${username}
I said, Hello Mr. Jenkins
Understanding how to use string interpolation is vital for using some of Pipeline’s more advanced features.

Interpolation of sensitive environment variables
Groovy string interpolation should never be used with credentials.

Groovy GStrings (double-quoted or triple-double-quoted strings) expand ${TOKEN} before the command is sent to the agent. When that happens, the secret is copied into the process arguments (visible via tooling such as ps) and any shell metacharacters inside the secret or user-controlled value are executed immediately. Always send a literal command string to sh, bat, powershell, or pwsh and let the shell read secrets from its environment, which withCredentials or the environment block have already populated.

Jenkinsfile (Declarative Pipeline)
pipeline {
    agent any
    environment {
        API_TOKEN = credentials('example-token-id')
    }
    stages {
        stage('Example') {
            steps {
                /* WRONG! */
                sh "curl -H 'Authorization: Bearer ${API_TOKEN}' https://example.com"
            }
        }
    }
}
Jenkinsfile (Declarative Pipeline)
pipeline {
    agent any
    environment {
        API_TOKEN = credentials('example-token-id')
    }
    stages {
        stage('Example') {
            steps {
                /* CORRECT */
                sh 'curl -H "Authorization: Bearer ${API_TOKEN}" https://example.com'
            }
        }
    }
}
Any Groovy construct that avoids interpolation (for example, sh(script: 'curl …​ $API_TOKEN', label: 'call API')) is safe; the key is keeping secrets out of GStrings so only the shell expands them.

Injection via interpolation
Groovy string interpolation can inject rogue commands into command interpreters via special characters.

Another note of caution. Using Groovy string interpolation for user-controlled variables with steps that pass their arguments to command interpreters such as the sh, bat, powershell, or pwsh steps can result in problems analogous to SQL injection. This occurs when a user-controlled variable (generally an environment variable, usually a parameter passed to the build) that contains special characters (e.g. / \ $ & % ^ > < | ;) is passed to the sh, bat, powershell, or pwsh steps using Groovy interpolation. For a simple example:

Jenkinsfile (Declarative Pipeline)
pipeline {
    agent any
    parameters {
        string(name: 'STATEMENT', defaultValue: 'hello; ls /', description: 'What should I say?')
    }
    stages {
        stage('Example') {
            steps {
                /* WRONG! */
                sh("echo ${STATEMENT}")
            }
        }
    }
}
In this example, the argument to the sh step is evaluated by Groovy, and STATEMENT is interpolated directly into the argument as if sh('echo hello; ls /') had been written in the Pipeline. When this is processed on the agent, rather than echoing the value hello; ls /, it will echo hello then proceed to list the entire root directory of the agent. Any user able to control a variable interpolated by such a step would be able to make the sh step run arbitrary code on the agent. To avoid this problem, make sure arguments to steps such as sh or bat that reference parameters or other user-controlled environment variables use single quotes to avoid Groovy interpolation.

Jenkinsfile (Declarative Pipeline)
pipeline {
    agent any
    parameters {
        string(name: 'STATEMENT', defaultValue: 'hello; ls /', description: 'What should I say?')
    }
    stages {
        stage('Example') {
            steps {
                /* CORRECT */
                sh('echo ${STATEMENT}')
            }
        }
    }
}
Credential mangling is another issue that can occur when credentials that contain special characters are passed to a step using Groovy interpolation. When the credential value is mangled, it is no longer valid and will no longer be masked in the console log.

Jenkinsfile (Declarative Pipeline)
pipeline {
    agent any
    environment {
        EXAMPLE_KEY = credentials('example-credentials-id') // Secret value is 'sec%ret'
    }
    stages {
        stage('Example') {
            steps {
                /* WRONG! */
                bat "echo ${EXAMPLE_KEY}"
            }
        }
    }
}
Here, the bat step receives echo sec%ret and the Windows batch shell will simply drop the % and print out the value secret. Because there is a single character difference, the value secret will not be masked. Though the value is not the same as the actual credential, this is still a significant exposure of sensitive information. Again, single-quotes avoid this issue.

Jenkinsfile (Declarative Pipeline)
pipeline {
    agent any
    environment {
        EXAMPLE_KEY = credentials('example-credentials-id') // Secret value is 'sec%ret'
    }
    stages {
        stage('Example') {
            steps {
                /* CORRECT */
                bat 'echo %EXAMPLE_KEY%'
            }
        }
    }
}
Handling parameters
Declarative Pipeline supports parameters out-of-the-box, allowing the Pipeline to accept user-specified parameters at runtime via the parameters directive. Configuring parameters with Scripted Pipeline is done with the properties step, which can be found in the Snippet Generator.

If you configured your pipeline to accept parameters using the Build with Parameters option, those parameters are accessible as members of the params variable.

Assuming that a String parameter named "Greeting" has been configured in the Jenkinsfile, the Pipeline can access that parameter via ${params.Greeting}:

Jenkinsfile (Declarative Pipeline)
pipeline {
    agent any
    parameters {
        string(name: 'Greeting', defaultValue: 'Hello', description: 'How should I greet the world?')
    }
    stages {
        stage('Example') {
            steps {
                echo "${params.Greeting} World!"
            }
        }
    }
}
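For reference, a minimal Scripted sketch of the same parameter using the properties step (note that parameters registered this way take effect from the next run of the job):

Jenkinsfile (Scripted Pipeline)
node {
    // Registers the parameter on the job itself
    properties([
        parameters([
            string(name: 'Greeting', defaultValue: 'Hello', description: 'How should I greet the world?')
        ])
    ])
    stage('Example') {
        echo "${params.Greeting} World!"
    }
}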
Handling failure
Declarative Pipeline supports robust failure handling by default via its post section which allows declaring a number of different "post conditions" such as: always, unstable, success, failure, and changed. The Pipeline Syntax section provides more detail on how to use the various post conditions.

Jenkinsfile (Declarative Pipeline)
pipeline {
    agent any
    stages {
        stage('Test') {
            steps {
                sh 'make check'
            }
        }
    }
    post {
        always {
            junit '**/target/*.xml'
        }
        failure {
            mail to: 'team@example.com', subject: 'The Pipeline failed :('
        }
    }
}
Scripted Pipeline, however, relies on Groovy’s built-in try/catch/finally semantics for handling failures during execution of the Pipeline.

In the Test example above, the sh step was modified to never return a non-zero exit code (sh 'make check || true'). This approach, while valid, means the following stages need to check currentBuild.result to know if there has been a test failure or not.

An alternative way of handling this, which preserves the early-exit behavior of failures in Pipeline, while still giving junit the chance to capture test reports, is to use a series of try/finally blocks:

Error-handling steps
Jenkins Pipelines provide dedicated steps for flexible error handling, allowing you to control how your Pipeline responds to errors and warnings. These steps help you surface errors and warnings clearly in Jenkins, giving you control over whether the Pipeline fails, continues, or simply reports a warning. For more information, refer to:

catchError

error

unstable

warnError
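For illustration, a minimal Declarative sketch combining two of these steps; the make targets here are placeholders:

Jenkinsfile (Declarative Pipeline)
pipeline {
    agent any
    stages {
        stage('Lint') {
            steps {
                // Mark the stage as failed but let the build continue,
                // downgrading the overall build result to UNSTABLE
                catchError(buildResult: 'UNSTABLE', stageResult: 'FAILURE') {
                    sh 'make lint'
                }
            }
        }
        stage('Docs') {
            steps {
                // Log a warning and mark the build UNSTABLE if the block fails
                warnError('Docs build failed') {
                    sh 'make docs'
                }
            }
        }
    }
}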

Using multiple agents
In all the previous examples, only a single agent has been used. This means Jenkins will allocate an executor wherever one is available, regardless of how it is labeled or configured. Not only can this behavior be overridden, but Pipeline allows utilizing multiple agents in the Jenkins environment from within the same Jenkinsfile, which can be helpful for more advanced use-cases such as executing builds/tests across multiple platforms.

In the example below, the "Build" stage will be performed on one agent and the built results will be reused on two subsequent agents, labelled "linux" and "windows" respectively, during the "Test" stage.

Jenkinsfile (Declarative Pipeline)
pipeline {
    agent none
    stages {
        stage('Build') {
            agent any
            steps {
                checkout scm
                sh 'make'
                stash includes: '**/target/*.jar', name: 'app'
            }
        }
        stage('Test on Linux') {
            agent {
                label 'linux'
            }
            steps {
                unstash 'app'
                sh 'make check'
            }
            post {
                always {
                    junit '**/target/*.xml'
                }
            }
        }
        stage('Test on Windows') {
            agent {
                label 'windows'
            }
            steps {
                unstash 'app'
                bat 'make check'
            }
            post {
                always {
                    junit '**/target/*.xml'
                }
            }
        }
    }
}
The stash step allows capturing files matching an inclusion pattern (**/target/*.jar) for reuse within the same Pipeline. Once the Pipeline has completed its execution, stashed files are deleted from the Jenkins controller.
The parameter in agent/node allows for any valid Jenkins label expression. Consult the Pipeline Syntax section for more details.
unstash will retrieve the named "stash" from the Jenkins controller into the Pipeline’s current workspace.
The bat step allows for executing batch scripts on Windows-based platforms.
Optional step arguments
Pipeline follows the Groovy language convention of allowing parentheses to be omitted around method arguments.

Many Pipeline steps also use the named-parameter syntax as a shorthand for creating a Map in Groovy, which uses the syntax [key1: value1, key2: value2]. Making statements like the following functionally equivalent:

git url: 'git://example.com/amazing-project.git', branch: 'master'
git([url: 'git://example.com/amazing-project.git', branch: 'master'])
For convenience, when calling steps taking only one parameter (or only one mandatory parameter), the parameter name may be omitted, for example:

sh 'echo hello' /* short form */
sh([script: 'echo hello']) /* long form */
Advanced Scripted Pipeline
Scripted Pipeline is a domain-specific language [1] based on Groovy, so most Groovy syntax can be used in Scripted Pipeline without modification.

Parallel execution
The example in the section above runs tests across two different platforms in a linear series. In practice, if the make check execution takes 30 minutes to complete, the "Test" stage would now take 60 minutes to complete!

Fortunately, Pipeline has built-in functionality for executing portions of Scripted Pipeline in parallel, implemented in the aptly named parallel step.

Refactoring the example above to use the parallel step:

Jenkinsfile (Scripted Pipeline)
stage('Build') {
    /* .. snip .. */
}

stage('Test') {
    parallel linux: {
        node('linux') {
            checkout scm
            try {
                unstash 'app'
                sh 'make check'
            }
            finally {
                junit '**/target/*.xml'
            }
        }
    },
    windows: {
        node('windows') {
            /* .. snip .. */
        }
    }
}
Instead of executing the tests on the "linux" and "windows" labelled nodes in series, they will now execute in parallel assuming the requisite capacity exists in the Jenkins environment.