Shared Library Pipelines

Shared Libraries are becoming the preferred method of providing code that can be re-used (shared) between pipeline definitions.

The following example pipelines on the Compuware GitHub are built on shared library principles.

  • Mainframe_CI_Pipeline_from_Shared_Lib.groovy is intended to be triggered after promoting code within ISPW. It implements the same process as the basic pipeline example.

  • Using Shared Libraries allows re-using existing scripts and combining them to build more complex processes. To demonstrate this, we use the following two pipelines, which are part of a more elaborate process. Instead of setting up individual webhooks for each script, we create a third "dispatching" script which gets triggered by several different ISPW operations. Depending on the operation, this script calls one of the two scripts:

    Note

    Both these scripts could still be used independently of each other in corresponding Jenkins jobs.

  • Git to ISPW integration/synchronization

In these pages we describe each of these examples in detail.

Simple Shared Library example Mainframe_CI_Pipeline_from_Shared_Lib

This pipeline executes the following steps after a developer has promoted their code in ISPW:

  • Retrieve the mainframe code from ISPW for later analysis by SonarQube
  • Retrieve Topaz for Total Test test definitions for the corresponding ISPW application from GitHub
  • Execute those test scenarios that belong to the COBOL programs that have been promoted
  • Retrieve the Code Coverage metrics generated during test execution from the mainframe repository
  • Pass all information (sources, test results, code coverage metrics) to SonarQube
  • Receive a Sonar quality gate webhook callback and analyze the status of the quality gate (see the sketch after this list)
  • If the quality gate was passed, continue the process by triggering an XL Release release template
  • In either case (passed/failed), send an email to the developer informing them of the status of the pipeline
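
As an illustration of the quality gate callback, the following is a minimal sketch assuming the SonarQube Scanner plugin's waitForQualityGate step; the timeout value is an arbitrary choice, not taken from the actual example.

// Wait for the quality gate webhook callback from SonarQube.
// Requires a webhook in SonarQube pointing to <Jenkins URL>/sonarqube-webhook/
// and a preceding analysis executed inside withSonarQubeEnv.
timeout(time: 2, unit: 'MINUTES') {
    def sonarGate = waitForQualityGate()

    if (sonarGate.status != 'OK') {
        // Gate failed: mark the build so the mail step reports the failure
        currentBuild.result = 'FAILURE'
    }
}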

Comparing the code in this example to the basic example pipeline, you will find that they differ in only a few minor details. One of the differences is that the "hard coded" configuration parameters are externalized into a configuration .yml file, allowing changes in configuration without having to change any pipeline code.
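
As a sketch of that mechanism, a library script could read such a file with the Pipeline Utility Steps plugin's readYaml step; the file name pipeline_config.yml and the keys shown below are assumptions, not the ones used by the actual examples.

// Hypothetical excerpt of a configuration .yml kept in the library's resources folder:
//
//   sonar:
//     serverHost: 'http://sonarqube.example.com:9000'
//
// libraryResource loads a file from the Shared Library's resources folder;
// readYaml (Pipeline Utility Steps plugin) parses it into a map.
def config = readYaml(text: libraryResource('pipeline_config.yml'))

echo "Using SonarQube server ${config.sonar.serverHost}"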

A combined scenario

The two pipelines making up the combined scenario are stored in the same Shared Library, use the same principles, and share the same configuration files. They implement two steps in a larger process and get called by a single script executing either of them, based on the stage in the process.

Calling script Mainframe_Combined_Pipeline

The Jenkins job is configured to use the initial script Mainframe_Combined_Pipeline.jenkinsfile from the src/Jenkinsfile folder in the Git repository underlying the Shared Library, similar to loading the script from GitHub for the basic pipeline.

The code will determine the ISPW operation triggering the pipeline from the ISPW_Operation parameter, which gets its value from the webhook via the $$operation$$ parameter. Based on the value of ISPW_Operation it will

  • call/execute Mainframe_Generate_Pipeline if the value is 'Generate'
  • call/execute Mainframe_Integration_Pipeline if the value is 'Promote'
  • stop execution if the value is none of the above

// Load the Shared Library configured in Jenkins under the name Shared_Lib, branch master
@Library('Shared_Lib@master') _

def parmMap = [
...
]

// Dispatch to the appropriate pipeline, based on the ISPW operation
// that triggered the job via the webhook
switch(ISPW_Operation) {
    case 'Generate':
        currentBuild.displayName = BUILD_NUMBER + ": Code Generation"
        Mainframe_Generate_Pipeline(parmMap)
        break
    case 'Promote':
        currentBuild.displayName = BUILD_NUMBER + ": Code Promotion"
        Mainframe_Integration_Pipeline(parmMap)
        break
    default:
        echo "Unsupported operation " + ISPW_Operation
        echo "Review your Webhook settings"
        break
}

The parmMap is the same Map of parameters used for the simple Shared Library example.
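
Purely as a hypothetical illustration, such a map could look like the following; the key names are invented here, the actual keys are defined by the scripts in the Shared Library.

// Hypothetical illustration only - the actual keys are defined by the Shared Library scripts
def parmMap = [
    ISPW_Stream:      ISPW_Stream,      // ISPW stream, received from the webhook
    ISPW_Application: ISPW_Application, // ISPW application
    ISPW_Src_Level:   ISPW_Src_Level,   // level the operation was executed for
    ISPW_Owner:       ISPW_Owner        // owner of the set, receives the status mail
]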

Setting the currentBuild.displayName property allows distinguishing the different operations the pipeline job is executed for.

Mainframe_Generate_Pipeline

This pipeline is supposed to be executed every time (COBOL) components get generated within ISPW. As sketched in the skeleton after the following list, it will

  • download those COBOL components that are part of the set resulting from the generate
  • retrieve Topaz for Total Test tests from a GitHub repository for the corresponding stream and application
  • execute those virtualized test scenarios that correspond to the downloaded components
  • retrieve the Code Coverage results from the Code Coverage repository
  • send sources, test results and coverage metrics to SonarQube
  • query the results of the corresponding SonarQube quality gate
  • send a mail message to the owner of the set, informing them of the status of the quality gate
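
Like any Shared Library step, the pipeline is implemented as a call method in a script underneath the library's vars folder and receives the parameter map described above. A minimal skeleton could look like the following; the stage names are illustrative, not necessarily the ones used by the actual script.

// vars/Mainframe_Generate_Pipeline.groovy - skeleton only, stage names are illustrative
def call(Map parmMap) {
    node {
        stage('Download sources') {
            // download the COBOL components belonging to the generated set (ISPW plugin)
        }
        stage('Execute virtualized tests') {
            // run the Topaz for Total Test scenarios matching the downloaded components
            // and collect the Code Coverage results from the repository
        }
        stage('SonarQube analysis') {
            // pass sources, test results and coverage metrics to SonarQube
            // and query the resulting quality gate
        }
        stage('Notify owner') {
            // mail the quality gate status to the owner of the set
        }
    }
}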

Mainframe_Integration_Pipeline

This pipeline is supposed to be executed every time (COBOL) components get promoted within ISPW. It will

  • download those COBOL components that are part of the assignment for which the promote was executed
  • retrieve Topaz for Total Test tests from the same GitHub repository for the corresponding stream and application
  • execute all non-virtualized test scenarios
  • send sources and test results to SonarQube
  • query the results of the corresponding SonarQube quality gate
  • if the quality gate was passed, trigger an XL Release release template to orchestrate the following CD process (see the sketch after this list)
  • send a mail message to the owner of the set, informing them of the status of the quality gate
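
Triggering the release template could look like the following sketch, assuming the XebiaLabs XL Release plugin's xlrCreateRelease step; the template name, the credentials id, and the ISPW_Application parameter are placeholders, not values from the actual script.

// Trigger the follow-on CD process only after a passed quality gate.
// xlrCreateRelease is provided by the XL Release plugin; names are placeholders.
def sonarGate = waitForQualityGate()

if (sonarGate.status == 'OK') {
    xlrCreateRelease(
        releaseTitle:      "Release for application ${ISPW_Application}",
        serverCredentials: 'XLR_Credentials',
        template:          'A Release from Jenkins',
        startRelease:      true,
        variables:         [[propertyName: 'ISPW_Application', propertyValue: ISPW_Application]]
    )
}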

Git to ISPW Synchronization

Comparing the code of the two pipeline scripts reveals many similarities and repetitions. Alternatively, one might consider creating one single script that contains all steps and, based on the situation, executes only those steps that are required at the specific stage.

This is the strategy that has been used for the Git/ISPW synchronization scenario. As a result, there is one script (Git_MainframeCode_Pipeline.groovy) executing different steps based on the branch of the underlying application it is running for.
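
A sketch of that branch-based dispatch, assuming a Multibranch Pipeline job in which Jenkins populates BRANCH_NAME; the branch names themselves are illustrative.

// Simplified dispatch sketch; branch names are illustrative,
// BRANCH_NAME is set by Jenkins in Multibranch Pipeline jobs.
if (BRANCH_NAME.startsWith('feature')) {
    // feature branches: synchronize the changes to ISPW and run virtualized tests
}
else if (BRANCH_NAME == 'development') {
    // development branch: additionally execute integration tests and the quality gate
}
else if (BRANCH_NAME == 'main') {
    // main branch: execute the full set of steps, including triggering the release
}
else {
    error "No pipeline steps defined for branch " + BRANCH_NAME
}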