Jenkins DSL Pipeline

The repository contains job definitions and an opinionated setup pipeline based on the Jenkins Job DSL plugin. Those jobs form an empty pipeline and a sample, opinionated one that you can use in your company.

The following projects take part in the microservice setup for this demo.

  • Github-Analytics - the app that has a REST endpoint and uses messaging. Our business application.

  • Github Webhook - project that emits messages that are used by Github Analytics. Our business application.

  • Eureka - simple Eureka Server. This is an infrastructure application.

  • Github Analytics Stub Runner Boot - Stub Runner Boot server to be used for tests with Github Analytics. Uses Eureka and Messaging. This is an infrastructure application.

Project setup

.
├── jobs
│   ├── jenkins_pipeline_empty.groovy
│   ├── jenkins_pipeline_jenkinsfile_empty.groovy
│   ├── jenkins_pipeline_sample.groovy
│   └── jenkins_pipeline_sample_view.groovy
├── seed
│   ├── gradle.properties
│   ├── init.groovy
│   ├── jenkins_pipeline.groovy
│   └── settings.xml
└── src
    ├── main
    └── test

In the jobs folder you have all the seed jobs that will generate pipelines.

  • jenkins_pipeline_empty.groovy - is a template of a pipeline with empty steps using the Jenkins Job DSL plugin

  • jenkins_pipeline_jenkinsfile_empty.groovy - is a template of a pipeline with empty steps using the Pipeline plugin

  • jenkins_pipeline_sample.groovy - is an opinionated implementation using the Jenkins Job DSL plugin

  • jenkins_pipeline_sample_view.groovy - builds the views for the pipelines

In the seed folder you have the init.groovy file, which is executed when Jenkins starts. That way we can configure most of the Jenkins options for you (adding credentials, JDK, etc.). jenkins_pipeline.groovy contains the logic to build a seed job (that way you don't even have to configure that job manually - we generate it for you).

In the src folder you have the production and test classes needed for you to build your own pipeline. Currently there are only tests, because the whole logic resides in the jenkins_pipeline_sample file.
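
If you want to run just those tests on their own, a minimal sketch (using the standard Gradle test task; the full build described in the "How to build it" section runs them too):

./gradlew test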

Step by step

If you want to just run the demo as far as possible using PCF Dev and Docker Compose, follow the steps in this section.

Below you can also find the optional steps that need to be taken when you want to customize the pipeline.

Fork repos

Out of the 4 apps that compose the pipeline, you need to fork only the two business applications: github-webhook and github-analytics. That's because only then will your user be able to tag and push the tag to the repo.
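
If you prefer the command line and have the GitHub CLI (gh) installed, a sketch of forking the two business apps could look like this (forking via the GitHub web UI works just as well):

# fork the 2 business apps into your own account / org - the GitHub web UI works too
gh repo fork spring-cloud-samples/github-webhook --clone=false
gh repo fork spring-cloud-samples/github-analytics --clone=false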

Start Jenkins and Artifactory

Jenkins + Artifactory can be run locally. To do that, just execute the start.sh script from this repo.

git clone https://github.com/spring-cloud/spring-cloud-pipelines
cd spring-cloud-pipelines/jenkins
./start.sh yourGitUsername yourGitPassword yourForkedGithubOrg

Then Jenkins will be running on port 8080 and Artifactory on port 8081. The provided parameters will be passed as env variables to the Jenkins VM and the credentials will be set for you in Jenkins. That way you don't have to do any manual work on the Jenkins side. In the above parameters, the third parameter can be either yourForkedGithubOrg or yourGithubUsername. The REPOS env variable will also contain your GitHub org in which you have the forked repos.
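
A quick, optional sanity check that both services answer (assuming they run on localhost with the ports mentioned above):

# both should return an HTTP status code once the containers are up
curl -s -o /dev/null -w "Jenkins:     %{http_code}\n" http://localhost:8080
curl -s -o /dev/null -w "Artifactory: %{http_code}\n" http://localhost:8081/artifactory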

Deploy the infra JARs to Artifactory

When Artifactory is running, just execute the tools/deploy-infra.sh script from this repo.

git clone https://github.com/spring-cloud/spring-cloud-pipelines
cd spring-cloud-pipelines/
./tools/deploy-infra.sh

As a result both eureka and stub runner repos will be cloned, built and uploaded to Artifactory.
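
If you want to double-check that the artifacts really landed in Artifactory, here is a hedged example using Artifactory's artifact search API (assuming the default Docker setup on localhost:8081 and the admin/password credentials from seed/settings.xml):

curl -s -u admin:password \
  "http://localhost:8081/artifactory/api/search/artifact?name=eureka&repos=libs-release-local"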

Start PCF Dev

Tip You can skip this step if you have CF installed and don't want to use PCF Dev. The only thing you have to do is to set up spaces.
Warning It’s more than likely that you’ll run out of resources when you reach stage step. Don’t worry! Keep calm and clear some apps from PCF Dev and continue.

You have to download and start PCF Dev. A link describing how to do it is available here.

The default credentials when using PCF Dev are:

username: user
password: pass
email: user
org: pcfdev-org
space: pcfdev-space
api: api.local.pcfdev.io
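
Put together, those defaults correspond to a login command like this (run it once PCF Dev is up - see below):

cf login -a https://api.local.pcfdev.io --skip-ssl-validation \
  -u user -p pass -o pcfdev-org -s pcfdev-space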

You can start PCF Dev like this:

cf dev start

You'll have to create 3 separate spaces (log in with email admin, password admin):

cf login -a https://api.local.pcfdev.io --skip-ssl-validation -u admin -p admin -o pcfdev-org

cf create-space pcfdev-test
cf set-space-role user pcfdev-org pcfdev-test SpaceDeveloper
cf create-space pcfdev-stage
cf set-space-role user pcfdev-org pcfdev-stage SpaceDeveloper
cf create-space pcfdev-prod
cf set-space-role user pcfdev-org pcfdev-prod SpaceDeveloper

You can also execute the ./tools/pcfdev-helper.sh setup-spaces to do this.
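
Either way, a quick way to verify that the spaces were created:

cf spaces    # should now list pcfdev-space, pcfdev-test, pcfdev-stage and pcfdev-prod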

Run the seed job

We already create the seed job for you, but you'll have to run it. When you run it you have to provide some properties. By default we create a seed that has all the property options, but you can delete most of them. If you set the properties as global env variables, you have to remove them from the seed.

To run the demo, just provide in the REPOS var a comma-separated list of URLs of the 2 aforementioned forks of github-webhook and github-analytics.
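
For example, assuming your forks live under yourForkedGithubOrg (a placeholder), the REPOS value could look like this:

REPOS=https://github.com/yourForkedGithubOrg/github-webhook,https://github.com/yourForkedGithubOrg/github-analytics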

Step 1: Click the 'jenkins-pipeline-seed' job

Step 2: Click 'Build with parameters'

Step 3: Provide the REPOS parameter with the URLs of your forks (you'll have more properties than the ones in the screenshot)

Step 4: This is how the results of the seed job should look

Run the github-webhook pipeline

Step 1: Click the 'github-webhook' view

Step 2: Run the pipeline

Step 3: You can set some properties (just click 'Build' to proceed)

Important Most likely your 1st build will suddenly hang for 10 minutes. If you rerun it, it should work after 2-3 minutes. My guess is that it's related to Docker Compose, so sorry for this unfortunate situation.

Important If your build fails on the 'deploy previous version to stage' step due to a missing jar, that means that you've forgotten to clear the tags in your repo. Typically that's because you've removed the Artifactory volume with the deployed JAR while a tag in the repo still points to it. Check out this section on how to remove the tag.

Step 4: Click the manual step to go to stage (remember about killing the apps on the test env). To do this, click the ARROW next to the job name

Important Most likely you will run out of memory so when reaching the stage environment it’s good to kill all apps on test. Check out the FAQ section for more details!

Step 5: The full pipeline should look like this

Optional steps

All the steps below are not necessary to run the demo. They are needed only when you want to do some custom changes.

Deploying infra jars to a different location

It's enough to set the ARTIFACTORY_URL environment variable before executing tools/deploy-infra.sh. Example for deploying to an Artifactory running at IP 192.168.99.100:

git clone https://github.com/spring-cloud/spring-cloud-pipelines
cd spring-cloud-pipelines/
ARTIFACTORY_URL="http://192.168.99.100:8081/artifactory/libs-release-local" ./tools/deploy-infra.sh

Setup settings.xml for Maven deployment

Tip If you want to use the default connection to the Docker version of Artifactory you can skip this step

So that ./mvnw deploy works with the Artifactory from Docker, we already copy the missing settings.xml file for you. It looks like this:

<server>
  <id>artifactory-local</id>
  <username>admin</username>
  <password>password</password>
</server>

If you want to use your own version of Artifactory / Nexus you have to update the file (it’s in seed/settings.xml).

Setup Jenkins env vars

If you only want to play around with the demo that we've prepared, you have to set ONE variable: the REPOS variable. That variable needs to consist of a comma-separated list of URLs to the repositories containing the business apps. So you should pass the URLs of your forked repos.

You can do it in the following ways:

  • globally via Jenkins global env vars (then when you run the seed that variable will be taken into consideration and proper pipelines will get built)

  • modify the seed job parameters (you’ll have to modify the seed job configuration and change the REPOS property)

  • provide the repos parameter when running the seed job

For the sake of simplicity let’s go with the last option.

Important If you're choosing the global envs, you HAVE to remove the other approach (e.g. if you set the global env for REPOS, please remove that property from the seed job).

Seed properties

Click on the seed job and pick Build with parameters. Then, as presented in the screen below (you'll have far more properties to set), just modify the REPOS property by providing a comma-separated list of URLs to your forks. Whatever you set will be parsed by the seed job and passed to the generated Jenkins jobs.

Tip This is very useful when the repos you want to build differ, e.g. they use different JDKs. Then some seeds can set the JDK_VERSION param to one version of the Java installation and others to another one.

Example screen:


In the screenshot we could parametrize the REPOS and REPO_WITH_JARS params.

Global envs
Important This section is presented only for informational purposes - for the sake of demo you can skip it

You can add env vars (go to configure Jenkins → Global Properties) for the following properties (the defaults are for PCF Dev):

Example screen:


All env vars

The env vars that are used in all of the jobs are as follows:

| Property Name | Property Description | Default value |
| --- | --- | --- |
| GIT_NAME | The name used by Git to tag the repo | Pivo Tal |
| CF_TEST_API_URL | The URL to the CF API for the TEST env | api.local.pcfdev.io |
| CF_STAGE_API_URL | The URL to the CF API for the STAGE env | api.local.pcfdev.io |
| CF_PROD_API_URL | The URL to the CF API for the PROD env | api.local.pcfdev.io |
| CF_TEST_ORG | Name of the org for the test env | pcfdev-org |
| CF_TEST_SPACE | Name of the space for the test env | pcfdev-space |
| CF_STAGE_ORG | Name of the org for the stage env | pcfdev-org |
| CF_STAGE_SPACE | Name of the space for the stage env | pcfdev-space |
| CF_PROD_ORG | Name of the org for the prod env | pcfdev-org |
| CF_PROD_SPACE | Name of the space for the prod env | pcfdev-space |
| REPO_WITH_JARS | URL of the repo with the deployed jars | http://artifactory:8081/artifactory/libs-release-local |
| M2_SETTINGS_REPO_ID | The id of the server from Maven settings.xml | artifactory-local |
| JDK_VERSION | The name of the JDK installation | jdk8 |
| PIPELINE_VERSION | What should be the version of the pipeline (ultimately also the version of the jar) | 1.0.0.M1-${GROOVY,script ="new Date().format('yyMMdd_HHmmss')"}-VERSION |
| GIT_EMAIL | The email used by Git to tag the repo | [email protected] |

Set Git email / user

Since our pipeline sets the Git user / name explicitly for the build step, you'd have to go to the build step's Configure and modify the Git name / email there. If you want to set it globally, you'll have to remove that section from the build step and follow these steps to set it globally.

You can set Git email / user globally like this:

Step 1: Click 'Manage Jenkins'

Step 2: Click 'Configure System'

Step 3: Fill out the Git user information

Jenkins Credentials

In the scripts we reference the credentials via IDs. These are the defaults for the credentials:

| Property Name | Property Description | Default value |
| --- | --- | --- |
| CF_PROD_CREDENTIAL_ID | Credential ID for CF Prod env access | cf-prod |
| GIT_CREDENTIAL_ID | Credential ID used to tag a git repo | git |
| CF_TEST_CREDENTIAL_ID | Credential ID for CF Test env access | cf-test |
| CF_STAGE_CREDENTIAL_ID | Credential ID for CF Stage env access | cf-stage |

If you already have a credential in your system to, for example, tag a repo, you can use it by passing its value via the GIT_CREDENTIAL_ID property.

Add Jenkins credentials for GitHub

The scripts will need to access the credential in order to tag the repo.

You have to set credentials with id: git.

Below you can find instructions on how to set a credential (the example uses the cf-test credential, but remember to provide the one with the id git).

Step 1: Click 'Credentials, System'

Step 2: Click 'Global Credentials'

Step 3: Click 'Add credentials'

Step 4: Fill out the user / password and provide the credential ID (in this example cf-test)

Enable Groovy Token Macro Processing

We have scripted that for you, but if you needed to do this manually, this is how to do it:

Step 1: Click 'Manage Jenkins'

Step 2: Click 'Configure System'

Step 3: Check 'Allow token macro processing'

Docker Image

If you would like to run the pre-configured Jenkins image somewhere other than your local machine, we have an image you can pull and use on DockerHub. The latest tag corresponds to the latest snapshot build. You can also find tags corresponding to stable releases that you can use as well.
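
A minimal sketch of running it (the image name below is a placeholder - substitute the actual repository name published on DockerHub):

docker pull your-dockerhub-org/spring-cloud-pipelines-jenkins:latest
docker run -d -p 8080:8080 your-dockerhub-org/spring-cloud-pipelines-jenkins:latest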

FAQ

Pipeline version contains ${PIPELINE_VERSION}

You can check the Jenkins logs and you’ll see

WARNING: Skipped parameter `PIPELINE_VERSION` as it is undefined on `jenkins-pipeline-sample-build`.
    Set `-Dhudson.model.ParametersAction.keepUndefinedParameters`=true to allow undefined parameters
    to be injected as environment variables or
    `-Dhudson.model.ParametersAction.safeParameters=[comma-separated list]`
    to whitelist specific parameter names, even though it represents a security breach

To fix it you have to do exactly what the warning suggests. Also ensure that the Groovy token macro processing checkbox is set.
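
How to pass that switch depends on how you start Jenkins; as a sketch, when running Jenkins straight from its WAR it would be:

# adapt to your installation / Docker setup - the property goes to the Jenkins JVM
java -Dhudson.model.ParametersAction.keepUndefinedParameters=true -jar jenkins.war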

Pipeline version is not passed to the build

You can see that the Jenkins version is properly set, but in the build the version is still a snapshot and echo "${PIPELINE_VERSION}" doesn't print anything.

You can check the Jenkins logs and you’ll see

WARNING: Skipped parameter `PIPELINE_VERSION` as it is undefined on `jenkins-pipeline-sample-build`.
    Set `-Dhudson.model.ParametersAction.keepUndefinedParameters`=true to allow undefined parameters
    to be injected as environment variables or
    `-Dhudson.model.ParametersAction.safeParameters=[comma-separated list]`
    to whitelist specific parameter names, even though it represents a security breach

To fix it you have to do exactly what the warning suggests.

The build times out with pipeline.sh info

Docker Compose, Docker Compose, Docker Compose… The problem is that, for some reason, only in Docker the execution of Java hangs. It hangs randomly and only the first time you try to execute the pipeline.

The solution is to run the pipeline again. Once it suddenly, magically passes, it will pass for any subsequent build.

Another thing that you can try is to run it with plain Docker. Maybe that will help.

Can I use the pipeline for some other repos?

Sure! You can pass the REPOS variable with a comma-separated list in project_name$project_url format. If you don't provide the project_name, the repo name will be extracted and used as the name of the project.

E.g. for REPOS equal to:

https://github.com/spring-cloud-samples/github-analytics,https://github.com/spring-cloud-samples/github-webhook

will result in the creation of pipelines with root names github-analytics and github-webhook.

E.g. for REPOS equal to:

foo$https://github.com/spring-cloud-samples/github-analytics,bar$https://github.com/spring-cloud-samples/atom-feed

will result in the creation of pipelines with root names foo for github-analytics and bar for atom-feed.

Will this work for ANY project out of the box?

Not really. This is an opinionated pipeline; that's why we made some opinionated decisions, such as:

  • usage of Spring Cloud, Spring Cloud Contract Stub Runner and Spring Cloud Eureka

  • application deployment to Cloud Foundry

  • For Maven:

    • usage of Maven Wrapper

    • artifacts deployment by ./mvnw clean deploy

    • stubrunner.ids property to retrieve list of collaborators for which stubs should be downloaded

    • running smoke tests on a deployed app via the smoke Maven profile

    • running end to end tests on a deployed app via the e2e Maven profile

  • For Gradle (in the github-analytics application check the gradle/pipeline.gradle file):

    • usage of the Gradle Wrapper

    • deploy task for artifacts deployment

    • running smoke tests on a deployed app via the smoke task

    • running end to end tests on a deployed app via the e2e task

    • groupId task to retrieve group id

    • artifactId task to retrieve artifact id

    • currentVersion task to retrieve the current version

    • stubIds task to retrieve list of collaborators for which stubs should be downloaded

This is the initial approach that can be easily changed in the future.
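
To make the conventions above concrete, here is a rough sketch of the commands they translate to (the exact goals and flags used by the bash scripts may differ):

# Maven
./mvnw clean deploy          # build and upload the artifact
./mvnw test -Psmoke          # smoke tests against the app deployed to the test env
./mvnw test -Pe2e            # end-to-end tests against the app deployed to stage

# Gradle (tasks come from gradle/pipeline.gradle)
./gradlew deploy
./gradlew smoke
./gradlew e2e
./gradlew currentVersion     # prints the version the pipeline will use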

Can I modify this to reuse in my project?

Sure! It’s open-source! The important thing is that the core part of the logic is written in Bash scripts. That way, in the majority of cases, you could change only the bash scripts without changing the whole pipeline.

I ran out of resources!!

When deploying the app to stage or prod you can get an Insufficient resources exception. The way to solve it is to kill some apps from the test / stage env. To achieve that, just call:

cf target -o pcfdev-org -s pcfdev-test
cf stop github-webhook
cf stop github-eureka
cf stop stubrunner

You can also execute ./tools/pcfdev-helper.sh kill-all-apps that will remove all demo-related apps deployed to PCF dev.

The rollback step fails due to missing JAR ?!

You must have pushed some tags and removed the Artifactory volume that contained the corresponding JARs. To fix this, just remove the tags:

git tag -l | xargs -n 1 git push --delete origin
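
If you also want to drop the local copies of those tags (optional, just to keep your clone tidy):

git tag -l | xargs -n 1 git tag -d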

I want to provide a different JDK version

  • by default we assume that you have a JDK with the id jdk8 configured

  • if you want a different one, just override the JDK_VERSION env var and point it to the proper installation

Tip The Docker image comes with Java installed at /usr/lib/jvm/java-8-openjdk-amd64. You can go to Global Tools and create a JDK with the jdk8 id and JAVA_HOME pointing to /usr/lib/jvm/java-8-openjdk-amd64.

To change the default one just follow these steps:

Step 1: Click 'Manage Jenkins'

Step 2: Click 'Global Tool'

Step 3: Click 'JDK Installations'

Step 4: Fill out the JDK Installation with the path to your JDK

And that’s it!

I want deployment to stage and prod to be automatic

No problem, just set the property / env var to true

  • AUTO_DEPLOY_TO_STAGE to automatically deploy to stage

  • AUTO_DEPLOY_TO_PROD to automatically deploy to prod

I can’t tag the repo!

When you get something like this:

19:01:44 stderr: remote: Invalid username or password.
19:01:44 fatal: Authentication failed for 'https://github.com/marcingrzejszczak/github-webhook/'
19:01:44
19:01:44     at org.jenkinsci.plugins.gitclient.CliGitAPIImpl.launchCommandIn(CliGitAPIImpl.java:1740)
19:01:44     at org.jenkinsci.plugins.gitclient.CliGitAPIImpl.launchCommandWithCredentials(CliGitAPIImpl.java:1476)
19:01:44     at org.jenkinsci.plugins.gitclient.CliGitAPIImpl.access$300(CliGitAPIImpl.java:63)
19:01:44     at org.jenkinsci.plugins.gitclient.CliGitAPIImpl$8.execute(CliGitAPIImpl.java:1816)
19:01:44     at hudson.plugins.git.GitPublisher.perform(GitPublisher.java:295)
19:01:44     at hudson.tasks.BuildStepMonitor$3.perform(BuildStepMonitor.java:45)
19:01:44     at hudson.model.AbstractBuild$AbstractBuildExecution.perform(AbstractBuild.java:779)
19:01:44     at hudson.model.AbstractBuild$AbstractBuildExecution.performAllBuildSteps(AbstractBuild.java:720)
19:01:44     at hudson.model.Build$BuildExecution.post2(Build.java:185)
19:01:44     at hudson.model.AbstractBuild$AbstractBuildExecution.post(AbstractBuild.java:665)
19:01:44     at hudson.model.Run.execute(Run.java:1745)
19:01:44     at hudson.model.FreeStyleBuild.run(FreeStyleBuild.java:43)
19:01:44     at hudson.model.ResourceController.execute(ResourceController.java:98)
19:01:44     at hudson.model.Executor.run(Executor.java:404)

most likely you’ve passed a wrong password. Check the credentials section on how to update your credentials.

Deploying to test / stage / prod fails - error finding space

If you receive a similar exception:

20:26:18 API endpoint:   https://api.local.pcfdev.io (API version: 2.58.0)
20:26:18 User:           user
20:26:18 Org:            pcfdev-org
20:26:18 Space:          No space targeted, use 'cf target -s SPACE'
20:26:18 FAILED
20:26:18 Error finding space pcfdev-test
20:26:18 Space pcfdev-test not found

It means that you’ve forgotten to create the spaces in your PCF Dev installation.

The route is already in use

If you play around with Jenkins / Concourse you might end up with the routes occupied

Using route github-webhook-test.local.pcfdev.io
Binding github-webhook-test.local.pcfdev.io to github-webhook...
FAILED
The route github-webhook-test.local.pcfdev.io is already in use.

Just delete the routes

yes | cf delete-route local.pcfdev.io -n github-webhook-test
yes | cf delete-route local.pcfdev.io -n github-eureka-test
yes | cf delete-route local.pcfdev.io -n stubrunner-test
yes | cf delete-route local.pcfdev.io -n github-webhook-stage
yes | cf delete-route local.pcfdev.io -n github-eureka-stage
yes | cf delete-route local.pcfdev.io -n github-webhook-prod
yes | cf delete-route local.pcfdev.io -n github-eureka-prod

You can also execute ./tools/pcfdev-helper.sh delete-routes.

I’m unauthorized to deploy infrastructure jars

Most likely you’ve forgotten to update your local settings.xml with the Artifactory’s setup. Check out this section of the docs and update your settings.xml.

How to build it

./gradlew clean build

Warning The tests that are run only check whether your scripts compile.

How to work with Jenkins Job DSL plugin

Check out the tutorial. Provide the link to this repository in your Jenkins installation.

Warning Remember that views can be overridden; that's why the suggestion is to contain in one script all the logic needed to build a view for a single project (check out how spring_cloud_views.groovy builds all the spring-cloud views).
