
Using the Git workflow below for any release:

*(image: Git workflow diagram — feature branches in green, develop branch in purple)*

For continuous delivery, my understanding is that two Jenkins pipelines need to be created, as described below:

1) A build pipeline that is triggered on the merge of every feature branch (green) into the develop branch (purple). The pipeline pushes `product-x.y-snapshot.jar` to the Nexus repo. The purpose of this jar is QA testing.

2) A release pipeline that is triggered on the merge of every new release branch into the master branch. The pipeline pushes `product-x.y.jar` to the Nexus repo. This jar goes to production directly.

Both pipelines run automated tests for every piece of functionality; the same set of autotests runs in both pipelines.
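To make the setup concrete, here is a minimal sketch of what the build (snapshot) pipeline might look like as a declarative Jenkinsfile. The Maven goals, Nexus URL, and repository name are placeholder assumptions, not details from the question:

```groovy
// Hypothetical Jenkinsfile for the build pipeline,
// triggered on merges into develop.
pipeline {
    agent any
    stages {
        stage('Build') {
            steps {
                // produces target/product-x.y-snapshot.jar
                sh 'mvn -B clean package'
            }
        }
        stage('Test') {
            steps {
                // unit + integration autotests
                sh 'mvn -B verify'
            }
        }
        stage('Publish snapshot') {
            steps {
                // push the snapshot jar to Nexus;
                // the repository URL below is a placeholder
                sh 'mvn -B deploy -DaltDeploymentRepository=snapshots::default::https://nexus.example.com/repository/maven-snapshots/'
            }
        }
    }
}
```

The release pipeline would follow the same shape, deploying `product-x.y.jar` to a release repository instead of a snapshot one.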


1) Do two pipelines suffice for a stable release (`product-x.y.jar` with the new features added as part of that release)?

2) If yes, how should the binary artifact repository be maintained for both the build and release pipelines, using Nexus? Please provide any reference.

overexchange

1 Answer


1) This question doesn't have one right answer. In most cases two pipelines (one for feature branches and another for the master branch) are sufficient for creating stable releases, but teams usually use three staging environments (for example, see the details in this article):

  • Dev environment: for working on the develop (or a feature) branch and running automation tests;
  • QA environment: for providing a more stable version of the code for testing by the QA team;
  • Prod environment: for building production-ready code that is currently on the master branch.

In that case you can have three pipelines, one per staging environment (or one parameterized pipeline for choosing and building each type of environment). There are many examples of Jenkins continuous delivery configurations on the internet.
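The single-pipeline-with-parameters option could be sketched like this; the environment names and the per-environment deploy scripts are illustrative assumptions only:

```groovy
// Hypothetical parameterized Jenkinsfile covering all three environments.
pipeline {
    agent any
    parameters {
        choice(name: 'TARGET_ENV',
               choices: ['dev', 'qa', 'prod'],
               description: 'Staging environment to build and deploy to')
    }
    stages {
        stage('Build & test') {
            steps {
                sh 'mvn -B clean verify'
            }
        }
        stage('Deploy') {
            steps {
                // deploy-to-<env>.sh scripts are placeholders for whatever
                // deployment mechanism each environment actually uses
                sh "./deploy-to-${params.TARGET_ENV}.sh"
            }
        }
    }
}
```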

2) If I understand you correctly, for maintaining artifacts you can use the Nexus Platform Plugin (see this example) or the Nexus Artifact Uploader to publish a specific artifact from Jenkins to Nexus.
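With the Nexus Artifact Uploader plugin, publishing a jar from a pipeline step might look like the sketch below; the server address, credentials ID, and Maven coordinates are all placeholders:

```groovy
// Hypothetical publish step using the Nexus Artifact Uploader plugin.
nexusArtifactUploader(
    nexusVersion: 'nexus3',
    protocol: 'https',
    nexusUrl: 'nexus.example.com',          // placeholder host
    repository: 'maven-releases',
    credentialsId: 'nexus-credentials',     // Jenkins credentials entry
    groupId: 'com.example',
    version: 'x.y',
    artifacts: [
        [artifactId: 'product',
         classifier: '',
         file: 'target/product-x.y.jar',
         type: 'jar']
    ]
)
```

The build pipeline would point at a snapshot repository and the release pipeline at a release repository, which keeps the two artifact streams separated in Nexus.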

biruk1230
  • Does the pipeline of the QA environment create any binary artifact? – overexchange Jan 14 '19 at 10:04
  • The QA environment usually isn't tied to some branch; the code being deployed to the QA environment corresponds to a specific **git tag** (it doesn't matter on which branch, but usually on **develop**). – biruk1230 Jan 14 '19 at 10:04
  • From the article I mentioned above: "many teams overlook the tagging portion of GitFlow, which can be a useful tool in solving this problem. The QA environment represents a release candidate, whether you officially call it that or not. In other words, you can specify the code by tagging it (i.e. 1.3.2-rc.1), or by referencing a commit hash, or the HEAD of any branch (which is just a shortcut to a commit hash). No matter what, the code being deployed to the QA environment corresponds to a unique commit." – biruk1230 Jan 14 '19 at 10:05
  • The QA pipeline only needs to check existing artifacts, so it doesn't need to create new ones. – biruk1230 Jan 14 '19 at 10:07
  • To satisfy the principle of "every commit is a potential release", we are already adding autotests (unit/integration) to the pipeline involved in the dev environment. So, what do you think is the value added by introducing a pipeline in the QA environment? – overexchange Jan 14 '19 at 10:23
  • QA teams usually have their own automation tests (e.g., smoke, regression, functional tests, and so on). That's why in such cases you can use the dev pipeline for fast unit/integration tests and the QA pipeline for other (often slower) tests that are needed only by the QA team. – biruk1230 Jan 14 '19 at 10:35
  • When you say "pipeline of QA need only to check new artifacts", are you referring to the artifact `product-x.y-snapshot.jar` generated by the dev pipeline? – overexchange Jan 14 '19 at 11:30
  • Yes, that's the main reason to create `product-x.y-snapshot.jar` with the dev pipeline: so the QA team can test it. – biruk1230 Jan 14 '19 at 11:31