JENKINS

Aditya Kumar
Mar 12, 2021

Introduction

Jenkins automates building, testing, reporting, packaging, staging, and deploying applications through its plugin ecosystem. It offers a simple way to set up a continuous integration or continuous delivery environment for almost any combination of languages and source code repositories using pipelines, and it can automate other routine development tasks as well. While Jenkins doesn't eliminate the need to create scripts for individual steps, it gives you a faster and more robust way to integrate your entire chain of build, test, and deployment tools than you could easily build yourself.
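To make the pipeline idea concrete, here is a minimal sketch of a declarative Jenkinsfile chaining build, test, and deploy stages. The Gradle commands and the `deploy.sh` script are placeholders for whatever tooling your project actually uses:

```groovy
// Minimal illustrative Jenkinsfile; shell commands are placeholders.
pipeline {
    agent any
    stages {
        stage('Build') {
            steps {
                sh './gradlew assemble'   // compile and package the application
            }
        }
        stage('Test') {
            steps {
                sh './gradlew test'       // run the automated test suite
            }
        }
        stage('Deploy') {
            steps {
                sh './deploy.sh staging'  // hand off to a deployment script
            }
        }
    }
}
```

Each stage runs in order, and a failure in any stage stops the pipeline, so broken code never reaches the deploy step.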

Advantages of Jenkins

There are a number of advantages to using Jenkins while developing software; some of them are mentioned below:

  1. Easy to set up and use
  2. The user interface is simple and intuitive
  3. Extremely flexible and easy to adapt to your purposes
  4. It has over 1000 plugins supporting communication, integration, and testing with numerous external applications, and if a plugin is not available, you can easily create one.
  5. It has a simple configuration through a web-based GUI, which speeds up job creation, improves consistency, and decreases maintenance costs.
  6. It allows consistent scripting across operating systems.
  7. Jenkins is written in Java, so it is portable to most major platforms.

Continuous Integration

Before jumping into Jenkins, one should have a clear concept of Continuous Integration (CI). Continuous Integration can be considered the cornerstone of the software development process and is used to integrate the various DevOps stages. It forces defects to surface early in the software cycle rather than waiting until the software is fully built.

Continuous Integration basically involves making small, frequent changes to the software, then building it and applying quality assurance processes. Using Jenkins for CI allows code to be built, deployed, and tested automatically with little manual effort.
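In practice, CI means the pipeline runs on every change. A sketch of what that might look like in a Jenkinsfile, assuming a Gradle project (in real setups a webhook from the repository is preferable to polling):

```groovy
// Illustrative CI pipeline; the polling schedule and commands are examples.
pipeline {
    agent any
    triggers {
        pollSCM('H/5 * * * *')  // check the repository for new commits roughly every 5 minutes
    }
    stages {
        stage('Checkout') {
            steps {
                checkout scm           // fetch the revision that triggered this build
            }
        }
        stage('Unit tests') {
            steps {
                sh './gradlew test'    // fail fast so defects surface early
            }
        }
    }
    post {
        failure {
            echo 'Build broken, notify the team immediately.'
        }
    }
}
```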

Continuous Delivery (CD)

Continuous delivery is the ability to get changes of all types, such as new features, configuration changes, bug fixes, and experiments, into production safely and efficiently using short work cycles.

The main goal of continuous delivery is to make deployments predictable, routine activities that can be performed on demand. To succeed, the code must always be in a deployable state, even with many developers working and making changes on a daily basis. All code progress and changes are delivered continuously with high quality and low risk. The end result is one or more artifacts that can be deployed to production.
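The "on demand" part is what distinguishes delivery from deployment: the pipeline produces a deployable artifact automatically, but a human decides when it goes live. A hedged sketch of two pipeline stages expressing this (stage names, paths, and scripts are placeholders, and the fragment belongs inside a larger `pipeline { stages { ... } }` block):

```groovy
// Fragment of a declarative pipeline illustrating a delivery gate.
stage('Publish artifact') {
    steps {
        sh './gradlew publish'                          // push a versioned artifact
        archiveArtifacts artifacts: 'build/libs/*.jar'  // keep it with the build record
    }
}
stage('Release to production') {
    input {
        message 'Deploy this build to production?'      // manual approval: delivery, not deployment
    }
    steps {
        sh './deploy.sh production'
    }
}
```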

Continuous Deployment (CD)

Continuous deployment, also known as continuous implementation, is an advanced stage of continuous delivery in which the automation does not stop at the delivery stage. In this methodology, every change that passes the automated testing stage is automatically released to production.

The fail-fast strategy is of the utmost importance when deploying to production. Since every change is deployed to production, it becomes possible to identify edge cases and unexpected behaviors that would be very hard to catch with automated tests alone. To take full advantage of continuous deployment, it is important to have solid logging in place that lets you spot a rising error count on newer versions, along with a trustworthy orchestration technology such as Kubernetes that can roll a new version out to users gradually until either the rollout completes or an incident is detected and the version is rolled back.
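With continuous deployment, the manual approval gate disappears. A sketch of what the final stage might look like when Jenkins drives a Kubernetes rollout (the manifest path and deployment name are invented for this example):

```groovy
// Fragment of a declarative pipeline; deployment name and manifest are placeholders.
stage('Deploy to production') {
    // No manual approval step: every change that passed the automated
    // tests is rolled out, and a failed rollout fails the build.
    steps {
        sh 'kubectl apply -f k8s/deployment.yaml'
        sh 'kubectl rollout status deployment/my-app --timeout=120s'
    }
}
```

The `rollout status` check is what makes failures visible fast: if the new version never becomes healthy, the pipeline fails instead of silently leaving a broken release in place.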

Automation

As a job executor, Jenkins can be used to automate repetitive tasks such as backing up and restoring databases, turning machines on or off, and collecting statistics about a service. Since every job can be scheduled, repetitive tasks can run at a desired interval (once a day, once a week, every fifth day of the month, and so forth).
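Scheduling uses Jenkins's cron-style syntax. A sketch of a nightly maintenance job, where the backup script is a placeholder:

```groovy
// Illustrative scheduled job; the backup script is a placeholder.
pipeline {
    agent any
    triggers {
        // Jenkins cron syntax; 'H' hashes the exact minute/hour per job
        // to spread load instead of starting everything at once.
        cron('H 2 * * *')      // once a day, around 02:00
        // cron('H 2 * * 0')   // once a week, on Sunday
        // cron('H 2 5 * *')   // on the fifth day of every month
    }
    stages {
        stage('Backup') {
            steps {
                sh './backup-database.sh'
            }
        }
    }
}
```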

Why Use It

  • Faster Development

Pulling the entire codebase for building and testing can consume a lot of time. Jenkins automates the build and test cycle, speeding up integration work and shortening the development loop.

  • Better Software Quality

While developing software this way, issues are generally detected and resolved before the software is completed, which results in higher-quality software and saves the organisation a lot of money.

  • Easily Customisable

A developer can easily extend Jenkins with multiple plugins, customising it and opening up many possibilities for how the software is used. The plugins are categorised on the Jenkins website, and a user should follow each plugin's installation instructions.

  • Effortless Auditing Of Previous Run

There is no need to spend human effort capturing console output: Jenkins captures both stdout and stderr while running jobs. Also, Jenkins's distributed build support lets you run a developer's work across multiple platforms without any struggle.

  • Large Community

Jenkins has a large, active community, and many plugins are available, including integrations for GitHub, Slack, Docker, and more, which keeps the project well maintained and up to date. You can also join the Jenkins community, interact with the developers, and share feedback and views on further improvements.

NETFLIX : A Case Study

Integrate

Once a line of code has been built and tested locally using Nebula, it is ready for continuous integration and deployment. The first step is to push the updated source code to a git repository. Teams are free to find a git workflow that works for them.

Once the change is committed, a Jenkins job is triggered. Our use of Jenkins for continuous integration has evolved over the years. We started with a single massive Jenkins master in our datacenter and have evolved to running 25 Jenkins masters in AWS. Jenkins is used throughout Netflix for a variety of automation tasks beyond simple continuous integration.

A Jenkins job is configured to invoke Nebula to build, test and package the application code. If the repository being built is a library, Nebula will publish the .jar to our artifact repository. If the repository is an application, then the Nebula ospackage plugin will be executed. Using the Nebula ospackage (short for “operating system package”) plugin, an application’s build artifact will be bundled into either a Debian or RPM package, whose contents are defined via a simple Gradle-based DSL. Nebula will then publish the Debian file to a package repository where it will be available for the next stage of the process, “baking”.
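The ospackage DSL is a Gradle build fragment. The sketch below is illustrative only: the plugin ID and task names come from the public Nebula documentation, but the package name, version number, and install path are invented for this example, and the plugin version should be checked against current releases:

```groovy
// Illustrative build.gradle fragment using the Nebula ospackage plugin.
// Package details below are hypothetical.
plugins {
    id 'com.netflix.nebula.ospackage' version '11.4.0'  // verify the current version
}

ospackage {
    packageName = 'my-service'     // hypothetical application name
    version = '1.0.0'
    into '/opt/my-service'         // install location on the target host
    from('build/libs') {
        include '*.jar'            // bundle the application's build artifact
    }
}
// Running `./gradlew buildDeb` (or `buildRpm`) then produces the package
// that gets published for the baking stage.
```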

Bake

Our deployment strategy is centered around the Immutable Server pattern. Live modification of instances is strongly discouraged in order to reduce configuration drift and ensure deployments are repeatable from source. Every deployment at Netflix begins with the creation of a new Amazon Machine Image, or AMI. To generate AMIs from source, we created “the Bakery”.

The Bakery exposes an API that facilitates the creation of AMIs globally. The Bakery API service then schedules the actual bake job on worker nodes that use Aminator to create the image. To trigger a bake, the user declares the package to be installed, as well the foundation image onto which the package is installed. That foundation image, or Base AMI, provides a Linux environment customized with the common conventions, tools, and services required for seamless integration with the greater Netflix ecosystem.

When a Jenkins job is successful, it typically triggers a Spinnaker pipeline. Spinnaker pipelines can be triggered by a Jenkins job or by a git commit. Spinnaker will read the operating system package generated by Nebula, and call the Bakery API to trigger a bake.

Deploy

Once a bake is complete, Spinnaker makes the resultant AMI available for deployment to tens, hundreds, or thousands of instances. The same AMI is usable across multiple environments as Spinnaker exposes a runtime context to the instance which allows applications to self-configure at runtime. A successful bake will trigger the next stage of the Spinnaker pipeline, a deploy to the test environment. From here, teams will typically exercise the deployment using a battery of automated integration tests. The specifics of an application's deployment pipeline become fairly custom from this point on. Teams will use Spinnaker to manage multi-region deployments, canary releases, red/black deployments and much more. Suffice to say that Spinnaker pipelines provide teams with immense flexibility to control how they deploy code.

The Road Ahead

Taken together, these tools enable a high degree of efficiency and automation. For example, it takes just 16 minutes to move our cloud resiliency and maintenance service, Janitor Monkey, from code check-in to a multi-region deployment.

(Figure: A Spinnaker bake and deploy pipeline triggered from Jenkins.)

That said, we are always looking to improve the developer experience and are constantly challenging ourselves to do it better, faster, and while making it easier.

One challenge we are actively addressing is how we manage binary dependencies at Netflix. Nebula provides tools focused on making Java dependency management easier. For instance, the Nebula dependency-lock plugin allows applications to resolve their complete binary dependency graph and produce a .lock file which can be versioned. The Nebula resolution rules plugin allows us to publish organization-wide dependency rules that impact all Nebula builds. These tools help make binary dependency management easier, but still fall short of reducing the pain to an acceptable level.
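As a rough illustration of the locking workflow (the plugin version is a placeholder to be checked against current releases; the task names are from the public Nebula documentation):

```groovy
// Illustrative build.gradle fragment enabling Nebula dependency locking.
plugins {
    id 'com.netflix.nebula.dependency-lock' version '15.0.0'  // verify the current version
}
// Resolve the full dependency graph and write it to dependencies.lock,
// which is then committed alongside the source:
//   ./gradlew generateLock saveLock
// Subsequent builds resolve against the locked versions, making them
// reproducible until the lock file is deliberately regenerated.
```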

Another challenge we are working to address is bake time. It wasn’t long ago that 16-minutes from commit to deployment was a dream, but as other parts of the system have gotten faster, this now feels like an impediment to rapid innovation. From the Simian Army example deployment above, the bake process took 7 minutes or 44% of the total bake and deploy time. We have found the biggest drivers of bake time to be installing packages (including dependency resolution) and the AWS snapshot process itself.

As Netflix grows and evolves, there is an increasing demand for our build and deploy toolset to provide first-class support for non-JVM languages, like JavaScript/Node.js, Python, Ruby and Go. Our current recommendation for non-JVM applications is to use the Nebula ospackage plugin to produce a Debian package for baking, leaving the build and test pieces to the engineers and the platform’s preferred tooling. While this solves the needs of teams today, we are expanding our tools to be language agnostic.

Containers provide an interesting potential solution to the last two challenges and we are exploring how containers can help improve our current build, bake, and deploy experience. If we can provide a local container-based environment that closely mimics that of our cloud environments, we potentially reduce the amount of baking required during the development and test cycles, improving developer productivity and accelerating the overall development process. A container that can be deployed locally just as it would be in production without modification reduces cognitive load and allows our engineers to focus on solving problems and innovating rather than trying to determine if a bug is due to environmental differences.

I hope you find this article useful.

If you have any suggestions, feel free to connect with me on LinkedIn.

Thank you, everyone, for reading!
