Introduction

Now that we have an initial picture of the pipeline, let’s dive into the technology behind it by dissecting a single release process. In the following section we will describe in more depth the rationale behind the presented opinionated pipeline. We will go through each deployment step and describe it in detail.

Project setup

.
├── common
├── concourse
├── docs
└── jenkins

common - contains the scripts with the common pipeline logic; these scripts are used by both the Concourse and the Jenkins pipelines.

concourse - contains the Concourse scripts, used only by Concourse.

docs - documentation of the project.

jenkins - contains the Jenkins scripts, used only by Jenkins.

How to use it?

This repository can be treated as a template for your pipeline. We provide an opinionated implementation that you can alter to suit your needs. The best approach to using it for your production projects is to download the Spring Cloud Pipelines repository as a ZIP file, then init a Git project there and modify it as you wish.

The example below uses the code from the master branch. You can also use a tag, for example 1.0.0.M2 - then your URL will look like this: [https://github.com/spring-cloud/spring-cloud-pipelines/archive/1.0.0.M2.zip](https://github.com/spring-cloud/spring-cloud-pipelines/archive/1.0.0.M2.zip) .

curl -LOk https://github.com/spring-cloud/spring-cloud-pipelines/archive/master.zip
unzip master.zip
cd spring-cloud-pipelines-master
git init
# modify the pipelines to suit your needs
git add .
git commit -m "Initial commit"
git remote add origin ${YOUR_REPOSITORY_URL}
git push origin master
Note Why aren’t you simply cloning the repo? This is meant to be a seed for building new, versioned pipelines for you. You don’t want to have all of our history dragged along with you, do you?

The flow

Let’s take a look at the flow of the opinionated pipeline

Figure 1. Flow in Concourse

Figure 2. Flow in Jenkins

We’ll first describe the overall concept behind the flow and then we’ll split it into pieces and describe every piece independently.

Environments

So that we’re on the same page, let’s define some common vocabulary. We distinguish four typical environments in terms of running the pipeline.

  • build

  • test

  • stage

  • prod

Build environment is a machine where the building of the application takes place. It’s a CI / CD tool worker.

Test is an environment where you can deploy an application in order to test it. It doesn’t resemble production - we can’t be sure of its state (which application is deployed there and in which version). It can be used by multiple teams at the same time.

Stage is an environment that does resemble production. Most likely applications are deployed there in versions that correspond to those deployed to production. Typically the databases there are filled with (obfuscated) production data. Most often this environment is a single one, shared between many teams. In other words, in order to run performance or user acceptance tests you have to block the environment and wait until it is free.

Prod is a production environment where we want our tested applications to be deployed for our customers.

Tests

Unit tests - tests that are executed on the application during the build phase. No integrations with databases, HTTP server stubs etc. take place. Generally speaking, your application should have plenty of these to get fast feedback on whether your features are working fine.

Integration tests - tests that are executed on the built application during the build phase. Integrations with in-memory databases and HTTP server stubs take place. According to the test pyramid, in most cases you should not have too many tests of this kind.

Smoke tests - tests that are executed on a deployed application. The idea of these tests is to check that the crucial parts of your application are working properly. If you have 100 features in your application but gain most of your money from, say, 5 of them, then you could write smoke tests for those 5 features. As you can see, we’re talking about smoke tests of an application, not of the whole system. In our understanding, inside the opinionated pipeline these tests are executed against an application that is surrounded by stubs.

End to end tests - tests that are executed on a system composed of multiple applications. The idea of these tests is to check whether the tested feature works when the whole system is set up. Because it takes a lot of time, effort and resources to maintain such an environment, and because those tests are often unreliable (due to the many moving pieces - network, databases, etc.), you should have only a handful of them, covering only the critical parts of your business. Since production is the ultimate verifier of whether your feature works, some companies skip these tests entirely and move directly to deployment on production. When your system has KPI monitoring and alerting in place you can quickly react when your deployed application is not behaving properly.

Performance tests - tests executed on an application or a set of applications to check whether your system can handle a big load of input. In the case of our opinionated pipeline these tests can be executed either on the test environment (against a stubbed setup) or on stage (against the whole system).

Testing against stubs

Before we go into details of the flow let’s take a look at the following example.

Figure 3. Two monolithic applications deployed for end to end testing

When you have only a handful of applications, performing end to end testing is beneficial. From the operations perspective it’s maintainable for a finite number of deployed instances. From the developers’ perspective it’s nice to verify the whole flow in the system for a feature.

In the case of microservices, scale starts to become a problem:

Figure 4. Many microservices deployed in different versions

The questions arise:

  • Should I queue deployments of microservices on one testing environment or should I have an environment per microservice?

    • If I queue deployments, people will have to wait for hours to have their tests run - that can be a problem
  • To remove that issue I can have an environment per microservice

    • Who will pay the bills (imagine 100 microservices, each having its own environment)?

    • Who will support each of those environments?

    • Should we spawn a new environment each time we execute a new pipeline and then wrap it up or should we have them up and running for the whole day?

  • In which versions should I deploy the dependent microservices - development or production versions?

    • If I have development versions then I can test my application against a feature that is not yet on production. That can lead to exceptions on production

    • If I test against production versions then I’ll never be able to test against a feature under development anytime before deployment to production.

One of the possibilities of tackling these problems is to… not do end to end tests.

Figure 5. Execute tests on a deployed microservice on stubbed dependencies

If we stub out all the dependencies of our application then most of the problems presented above disappear. There is no need to start and set up the infrastructure required by the dependent microservices. That way the testing setup looks like this:

Figure 6. We’re testing microservices in isolation

Such an approach to testing and deployment gives the following benefits (thanks to the usage of Spring Cloud Contract):

  • No need to deploy dependent services

  • The stubs used for the tests run on a deployed microservice are the same as those used during integration tests

  • Those stubs have been tested against the application that produces them (check Spring Cloud Contract for more information)

  • We don’t have many slow tests running on a deployed application - thus the pipeline gets executed much faster

  • We don’t have to queue deployments - we’re testing in isolation thus pipelines don’t interfere with each other

  • We don’t have to spawn virtual machines each time for deployment purposes

It brings however the following challenges:

  • No end to end tests before production - you don’t have full certainty that a feature is working

  • The first time the applications talk to each other in a real way will be on production

Like every solution it has its benefits and drawbacks. The opinionated pipeline allows you to configure whether you want to follow this flow or not.

General view

The general view behind this deployment pipeline is to:

  • test the application in isolation

  • test the backwards compatibility of the application in order to roll it back if necessary

  • allow testing of the packaged app in a deployed environment

  • allow user acceptance tests / performance tests in a deployed environment

  • allow deployment to production

Obviously the pipeline could have been split into more steps, but it seems that all of the aforementioned actions fit nicely into our opinionated proposal.

Opinionated implementation

For demo purposes we’re providing a Docker Compose setup with Artifactory and the Concourse / Jenkins tools. Regardless of the chosen CD application, for the pipeline to pass one needs a Cloud Foundry instance (for example Pivotal Web Services or PCF Dev) and the infrastructure applications deployed to the JAR hosting application (for the demo we’re providing Artifactory). The infrastructure applications are Eureka for service discovery and Stub Runner Boot for running Spring Cloud Contract stubs.
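
For illustration, bringing up the demo environment is just a matter of running Docker Compose from the directory of the CD tool you picked. The directory layout below is an assumption based on the project structure shown earlier - check your checkout for the actual Compose file locations.

# Assumption: the jenkins/ and concourse/ directories each ship a Docker Compose setup for the demo.
cd jenkins            # or: cd concourse
docker-compose up -d  # starts the chosen CD tool together with Artifactory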

Tip In the demos we’re showing you how to first build the github-webhook project. That’s because github-analytics needs the stubs of github-webhook to pass its tests. Below you’ll find references to the github-analytics project, since it contains the more interesting pieces as far as testing is concerned.
Build

Figure 7. Build and upload artifacts

In this step we’re generating a version of the pipeline and then we’re publishing 2 artifacts to Artifactory / Nexus:

  • a fat jar of the application

  • a Spring Cloud Contract jar containing stubs of the application

During this phase we’re executing a Maven build using the Maven Wrapper or a Gradle build using the Gradle Wrapper, with unit and integration tests. We’re also tagging the repository with a dev/${version} tag. That way, in each subsequent step of the pipeline, we’re able to retrieve the tagged version. We also know exactly which version of the pipeline corresponds to which Git hash.
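
As a rough sketch, the build step boils down to something like the commands below. The real logic lives in the scripts under the common directory; PIPELINE_VERSION stands in here for the version generated by the pipeline and is an assumed placeholder.

# Illustrative only - the actual pipeline scripts add more checks and error handling.
./mvnw versions:set -DnewVersion="${PIPELINE_VERSION}"   # set the generated version (or the Gradle equivalent via ./gradlew)
./mvnw clean deploy                                       # runs unit and integration tests, uploads the fat jar and the stubs jar
git tag "dev/${PIPELINE_VERSION}"                         # tag the repo so later steps can check out this exact revision
git push origin "dev/${PIPELINE_VERSION}"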

Test

Figure 8. Smoke test and rollback test on test environment

Here we’re

  • starting a RabbitMQ service in Cloud Foundry

  • deploying Eureka infrastructure application to Cloud Foundry

  • downloading the fat jar from Nexus and uploading it to Cloud Foundry. We want the application to run in isolation (surrounded by stubs). Currently, due to port constraints in Cloud Foundry, we cannot run multiple stubbed HTTP services in the cloud, so to work around this issue we’re running the application with the smoke Spring profile, under which you can stub out all HTTP calls to return a mocked response

  • if the application is using a database then it gets upgraded at this point via Flyway, Liquibase or any other tool once the application gets started

  • from the project’s Maven or Gradle build we’re extracting the stubrunner.ids property, which contains the groupId:artifactId:version:classifier notation of the dependent projects for which the stubs should be downloaded (a rough command sketch follows after this list).

  • then we’re uploading Stub Runner Boot and passing the extracted stubrunner.ids to it. That way we’ll have a running application in Cloud Foundry that downloads all the necessary stubs of our application

  • from the checked out code we’re running the tests available under the smoke profile. In the case of the GitHub Analytics application we’re triggering a message from the GitHub Webhook application’s stub, which is sent via RabbitMQ to GitHub Analytics. Then we’re checking whether the message count has increased. You can check those tests here.

  • once the tests pass we’re searching for the last production release. Once the application is deployed to production we’re tagging it with a prod/${version} tag. If there is no such tag (there was no production release), no rollback tests will be executed. If there was a production release, the tests will get executed.

  • assuming that there was a production release, we’re checking out the code corresponding to that release (we’re checking out the tag), downloading the appropriate fat jar and uploading it to Cloud Foundry. IMPORTANT: the old jar is running against the NEW version of the database.

  • we’re running the old smoke tests against the freshly deployed application surrounded by stubs. If those tests pass then we have a high probability that the application is backwards compatible

  • the default behaviour is that after all of those steps the user can manually click to deploy the application to a stage environment
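
The sketch below illustrates, with plain commands, the stub-related and smoke-test parts of this stage. It is an approximation rather than the actual script from the common directory: the stub-runner-boot.jar path, the STUBRUNNER_IDS environment variable and the application.url / stubrunner.url test properties are assumed placeholders.

# Extract the stubrunner.ids property from the Maven build (assumes the property is defined in the pom).
STUBRUNNER_IDS="$(./mvnw -q help:evaluate -Dexpression=stubrunner.ids -DforceStdout)"

# Deploy Stub Runner Boot to Cloud Foundry and hand it the ids so that it downloads the required stubs.
cf push stubrunner -p stub-runner-boot.jar --no-start
cf set-env stubrunner STUBRUNNER_IDS "${STUBRUNNER_IDS}"
cf start stubrunner

# Run the smoke tests from the checked out code against the deployed, stub-surrounded application.
./mvnw verify -Psmoke -Dapplication.url="${APPLICATION_URL}" -Dstubrunner.url="${STUBRUNNER_URL}"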

Stage

Figure 9. End to end tests on stage environment

Here we’re

  • starting a RabbitMQ service in Cloud Foundry

  • deploying Eureka infrastructure application to Cloud Foundry

  • downloading the fat jar from Nexus and uploading it to Cloud Foundry.

Next we have a manual step in which:

  • from the checked out code we’re running the tests available under the e2e profile. In the case of the GitHub Analytics application we’re sending an HTTP message to GitHub Analytics’ endpoint. Then we’re checking whether the received message count has increased. You can check those tests here.

The step is manual by default because the stage environment is often shared between teams and some preparations on databases / infrastructure have to take place before running the tests. Ideally this step should be fully automatic.
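
For illustration only, the manual e2e run amounts to invoking the tests under the e2e profile against the deployed application; the application.url property below is an assumed placeholder for however your tests locate the app.

./mvnw verify -Pe2e -Dapplication.url="${APPLICATION_URL}"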

Prod

Figure 10. Deployment to production

The step to deploy to production is manual but ideally it should be automatic.

Here we’re

  • starting a RabbitMQ service in Cloud Foundry (only for the demo to pass - you should provision the prod environment in a different way)

  • deploying Eureka infrastructure application to Cloud Foundry (only for the demo to pass - you should provision the prod environment in a different way)

  • tagging the Git repo with a prod/${version} tag

  • downloading the fat jar from Nexus

  • we’re doing a Blue Green deployment on Cloud Foundry (a minimal cf CLI sketch follows after this list):

    • we’re renaming the current instance of the app, e.g. fooService, to fooService-venerable

    • we’re deploying the new instance of the app under the fooService name

    • now two instances of the same application are running on production

  • in the Complete switch over, which is a manual step:

    • we’re deleting the old instance

    • remember to run this step only after you have confirmed that both instances are working fine!
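
A minimal sketch of the Blue Green switch using the Cloud Foundry CLI, assuming the application is called fooService as in the example above and that the downloaded fat jar is available locally (application-fat.jar is an assumed placeholder path):

cf rename fooService fooService-venerable   # keep the currently running instance around under a "venerable" name
cf push fooService -p application-fat.jar   # push the new version under the original name; both instances now run
# Complete switch over (manual) - delete the old instance only after verifying that the new one works fine.
cf delete fooService-venerable -f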
