From Data to Decisions: A Deep Dive into Apache DevLake’s Dashboards

One of our customers, a payments technology company, was managing a large application with multiple services in isolated repositories. Managing and analyzing DevOps data across tools such as GitLab, Jira, Jenkins, and SonarQube was very challenging due to on-premise infrastructure constraints.

Some of the key challenges the customer encountered were:

  • Tracking all code changes, commits, issues, and pull requests across multiple GitLab repositories, which is crucial for secure and efficient development
  • Managing a wide range of tasks, bugs, and feature requests in Jira without a proper overview of issue tracking, sprint progress, and backlog management, which made it difficult to prioritize work effectively
  • Keeping track of pipeline performance, build failures, and deployment status across different environments without a consolidated view, which slowed down the customer's release cycles and introduced risk into production systems
  • Identifying code vulnerabilities, bugs, and potential risks across various services without a unified view, which caused issues and security threats in the source code to be missed
  • Difficulty understanding the new deployment processes and identifying areas for further improvement

Managing key metrics in a large application with multiple services can be challenging and often leads to increased manual effort in data collection. This frequently results in errors and inconsistencies in reports, which impacts decision-making and diverts engineering teams from continuous improvement. By leveraging cloud DevOps services, organizations can implement automated monitoring, centralized logging, and real-time analytics, ensuring accurate metric tracking, reducing manual workload, and improving operational efficiency. This enables engineering teams to focus on innovation and process optimization rather than troubleshooting data discrepancies.

To successfully implement continuous delivery, you need to change the culture of how an entire organization views software development efforts

We responded by proposing Apache DevLake as a transformational solution to these challenges.

  • DevLake aligned very well with the client's business needs. It collects data from GitLab, Jira, Jenkins, and SonarQube through integrated plugins, stores the data in MySQL, and visualizes it on Grafana dashboards (a query sketch follows this list)
  • In this way, DevLake not only unified data across several tools but also made tracking key metrics easier, reduced manual data collection, and minimized errors, allowing engineering teams to focus on continuous improvement rather than operational overhead
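
Under the hood, the Grafana panels are built on SQL queries against this MySQL database. As a rough illustration, here is a minimal sketch of such a query in Python; the host, credentials, database, and the `pull_requests` domain-layer table are assumptions based on a default Docker Compose setup and may differ in your DevLake version and environment.

```python
# Minimal sketch: query DevLake's MySQL store the way a Grafana panel does.
# Host, credentials, database, and table/column names are assumptions based on
# a default Docker Compose setup; adjust them for your deployment.
import pymysql

connection = pymysql.connect(
    host="127.0.0.1",
    port=3306,
    user="merico",       # assumed default user
    password="merico",   # assumed default password; change for your environment
    database="lake",     # assumed default database name
)

try:
    with connection.cursor() as cursor:
        # Count pull requests created in the last 30 days, grouped by status.
        cursor.execute(
            """
            SELECT status, COUNT(*) AS pr_count
            FROM pull_requests
            WHERE created_date >= NOW() - INTERVAL 30 DAY
            GROUP BY status
            """
        )
        for status, pr_count in cursor.fetchall():
            print(f"{status}: {pr_count}")
finally:
    connection.close()
```

In practice you rarely need to write these queries yourself, since the pre-built Grafana dashboards described below already contain them.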

Now, let’s take a closer look at a real-world example where Apache DevLake metrics proved to be a game-changer for a customer’s project.

Pre-requisites:

  • Install DevLake using Helm or Docker Compose
  • Add connections to your data sources and configure the endpoints (e.g., Jenkins, GitLab, SonarQube, and Jira)
  • Collect data from the different DevSecOps tools by creating a project

Note: Please refer to the Introducing Apache DevLake Blog for DevLake installation and project creation.

Streamline data connections across tools like Azure DevOps, GitHub, and Jira for seamless project insights

  • Once you are done with the DevLake setup, navigate to the dashboards page as shown in the image above
  • The dashboard looks like the image below

Apache DevLake delivers role-based dashboards and actionable insights to streamline engineering and OSS workflows

  • Once the dashboard is loaded successfully, go to the By Data Source section, where you can view a dashboard for each integrated plugin

Seamlessly integrate data from GitLab, Jenkins, Jira, and SonarQube to deliver role-specific engineering insights

Understanding the DevLake Dashboards:

I will take you through a few of the critical metrics captured on the DevLake dashboards, showcasing how they can help teams gain insight into important KPIs. Here are the major metrics tracked on the dashboards from the integrated plugins:

GitLab:

Pull Request dashboard highlighting contribution stats, top contributors, and handling metrics, with 553 new PRs and a 28.4% non-merging ratio
  • The above GitLab dashboard has two important metric groups: 1. Contribution (PRs) and 2. How PRs are handled. These metrics are sub-categorized based on the collected data, as shown in the image above.

Contribution (PRs):

This section mainly focuses on the volume and impact of contributions made by developers through pull requests (PRs). The data is segregated based on selected time intervals.

How PRs Are Handled:

This section mainly focuses on how pull requests are closed or merged and how long it takes for them to be resolved.
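
As a rough illustration of how these two metric groups can be derived from the collected PR data, here is a minimal sketch in Python. The record structure and field names are hypothetical placeholders, not DevLake's actual schema.

```python
# Illustrative sketch: deriving "Contribution (PRs)" and "How PRs are handled"
# style metrics from pull-request records. Field names are hypothetical.
from datetime import datetime
from statistics import mean

pull_requests = [
    {"author": "alice", "created": datetime(2024, 5, 1), "merged": datetime(2024, 5, 2)},
    {"author": "bob",   "created": datetime(2024, 5, 3), "merged": None},  # closed without merging
    {"author": "alice", "created": datetime(2024, 5, 4), "merged": datetime(2024, 5, 7)},
]

# Contribution (PRs): volume of PRs per author in the selected time interval.
prs_per_author = {}
for pr in pull_requests:
    prs_per_author[pr["author"]] = prs_per_author.get(pr["author"], 0) + 1

# How PRs are handled: non-merge ratio and mean time to merge.
not_merged = [pr for pr in pull_requests if pr["merged"] is None]
non_merge_ratio = len(not_merged) / len(pull_requests) * 100
merge_times_days = [(pr["merged"] - pr["created"]).days for pr in pull_requests if pr["merged"]]

print("PRs per author:", prs_per_author)
print(f"Non-merge ratio: {non_merge_ratio:.1f}%")
print(f"Mean time to merge: {mean(merge_times_days):.1f} days")
```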

Jenkins:

Jenkins dashboard showcasing build performance with 644 successful builds, a 58.1% success rate, and an average build duration of 1.5 minutes

As we can see in the above image, the Jenkins dashboard captures four important metrics. Let's dive into each one:

  • Total Number of Successful Builds (Selected Time Range):

A consistent number of successful builds helps track how well code changes are being integrated, tested, and deployed within the CI/CD pipeline.

  • Mean Build Success Rate:

The build success rate lets teams compare the number of successful runs against failures, which helps them analyze bottlenecks or recurring issues that may be causing builds to fail.

  • Total Build Result Distribution:

This metric provides a detailed view of how often builds succeed, fail, or are cancelled/aborted. It helps teams quickly detect unusual behavior.

E.g., a sudden spike in failed builds could indicate a problem with the code or the pipeline configuration that needs immediate attention.

  • Mean Build Duration in Minutes:

This metric shows the average time builds take to complete successfully. Optimizing build duration is necessary to keep the overall development process efficient.
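
To make these four measures concrete, here is a minimal sketch that computes them from a list of build records. The record structure and field names are hypothetical placeholders for the build data DevLake collects from Jenkins.

```python
# Illustrative sketch: the four Jenkins dashboard metrics computed from a
# list of build records. Field names are hypothetical.
from collections import Counter
from statistics import mean

builds = [
    {"result": "SUCCESS", "duration_minutes": 1.2},
    {"result": "SUCCESS", "duration_minutes": 1.8},
    {"result": "FAILURE", "duration_minutes": 0.9},
    {"result": "ABORTED", "duration_minutes": 0.3},
]

# 1. Total number of successful builds in the selected time range.
successful_builds = [b for b in builds if b["result"] == "SUCCESS"]

# 2. Mean build success rate.
success_rate = len(successful_builds) / len(builds) * 100

# 3. Total build result distribution (success / failure / aborted).
result_distribution = Counter(b["result"] for b in builds)

# 4. Mean build duration in minutes (successful builds only).
mean_duration = mean(b["duration_minutes"] for b in successful_builds)

print(f"Successful builds: {len(successful_builds)}")
print(f"Success rate: {success_rate:.1f}%")
print(f"Result distribution: {dict(result_distribution)}")
print(f"Mean build duration: {mean_duration:.1f} minutes")
```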

Jira:

Issue tracking dashboard displaying 620 created issues, an 87.1% delivery rate, and a mean lead time of 16.8 days

On the Jira dashboard, Issue Throughput and Issue Lead Time are the two essential metrics for understanding a team's performance and workflow efficiency.

  • Issue Throughput:

Issue throughput refers to the number of issues a team delivers out of the total number of issues within a specific time frame.

  • Issue Lead Time:

Issue lead time measures the total time taken from when an issue is created until it is completed.

These two metrics are sub-categorized for easy analysis. With them, teams are better positioned to make decisions that enhance productivity and deliver good results to stakeholders.
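
As a rough illustration, the sketch below computes issue throughput and mean lead time from a list of issue records; the field names are hypothetical placeholders for the issue data DevLake collects from Jira.

```python
# Illustrative sketch: issue throughput and issue lead time from issue records.
# Field names are hypothetical.
from datetime import datetime
from statistics import mean

issues = [
    {"created": datetime(2024, 5, 1), "resolved": datetime(2024, 5, 10)},
    {"created": datetime(2024, 5, 2), "resolved": datetime(2024, 5, 20)},
    {"created": datetime(2024, 5, 5), "resolved": None},  # still open
]

# Issue throughput: delivered issues over the total number of issues in the time frame.
delivered = [i for i in issues if i["resolved"] is not None]
throughput = len(delivered) / len(issues) * 100

# Issue lead time: time from creation to completion, averaged over delivered issues.
lead_times_days = [(i["resolved"] - i["created"]).days for i in delivered]

print(f"Issue throughput: {throughput:.1f}% ({len(delivered)} of {len(issues)} issues)")
print(f"Mean issue lead time: {mean(lead_times_days):.1f} days")
```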

SonarQube:

SonarQube dashboard metrics are used to identify potential problems or issues in the code. Issues can include bugs, vulnerabilities, and code smells, all of which can affect the maintainability, reliability, and security of the codebase.

SonarQube dashboard summarizing code issues such as bugs, vulnerabilities, and code smells across the project's services

By identifying and addressing these issues, developers can improve the quality of their code and reduce technical debt. The above image gives you an idea of how the dashboard looks.
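
For illustration, here is a minimal sketch of how such issues could be summarized by type and by service to get that unified view; the service and field names are hypothetical.

```python
# Illustrative sketch: summarizing code issues by type across services.
# Service names and field names are hypothetical.
from collections import Counter

code_issues = [
    {"service": "payments-api", "type": "BUG"},
    {"service": "payments-api", "type": "VULNERABILITY"},
    {"service": "ledger",       "type": "CODE_SMELL"},
    {"service": "ledger",       "type": "CODE_SMELL"},
]

# Unified view: issue counts per service and per issue type.
issues_per_service = Counter(issue["service"] for issue in code_issues)
issues_per_type = Counter(issue["type"] for issue in code_issues)

print("Issues per service:", dict(issues_per_service))
print("Issues per type:", dict(issues_per_type))
```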

To see all the issues for a GitLab project, navigate to the SonarQube dashboard to view and collect the metrics.

Click on this link to learn more about SonarQube metrics.

DORA (DevOps Research and Assessment):

DORA metrics are important measures that help teams and projects assess and improve their software development practices so they can deliver reliable products. The image below shows how the DORA metrics appear on the DORA dashboard.

The dashboard presents DORA metrics, showcasing high deployment frequency (2 days/month), elite change failure rate (7.1%), high median lead time (1.2 hours), and medium service restoration time (113.3 minutes)
  • There are two significant clusters of metrics in DORA: Velocity and Stability. The DORA framework is concerned with keeping them in perspective with each other, as a whole, rather than treating them as independent variables.

Under Velocity, we have two important metrics:

  1. Deployment Frequency: The number of successful deployments to production, i.e., how rapidly the application reaches users.
  2. Lead Time for Changes: How long it takes from a commit to that code running in production, i.e., how quickly your team can respond to user requirements.

Similarly, for Stability, we have two metrics:

  1. Change Failure Rate: This metric shows how often your deployments fail.
  2. Median Time to Restore Service (MTTR): How long it takes the team to fully recover from a failure once it is identified.
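
To tie the four measures together, here is a minimal sketch that computes them from deployment and incident records. The record structure, field names, and values are hypothetical; DevLake derives the dashboard values from the deployment and incident data it collects.

```python
# Illustrative sketch: the four DORA metrics from deployment and incident records.
# Field names and values are hypothetical.
from datetime import datetime
from statistics import median

deployments = [
    {"committed": datetime(2024, 5, 1, 9, 0),   "deployed": datetime(2024, 5, 1, 10, 30), "failed": False},
    {"committed": datetime(2024, 5, 8, 14, 0),  "deployed": datetime(2024, 5, 8, 15, 10), "failed": True},
    {"committed": datetime(2024, 5, 20, 11, 0), "deployed": datetime(2024, 5, 20, 12, 0), "failed": False},
]
incidents = [
    {"detected": datetime(2024, 5, 8, 15, 20), "resolved": datetime(2024, 5, 8, 17, 13)},
]
days_in_period = 30

# Velocity
deployment_frequency = len(deployments) / days_in_period  # deployments per day
lead_times_hours = [(d["deployed"] - d["committed"]).total_seconds() / 3600 for d in deployments]
median_lead_time = median(lead_times_hours)

# Stability
change_failure_rate = sum(d["failed"] for d in deployments) / len(deployments) * 100
restore_minutes = [(i["resolved"] - i["detected"]).total_seconds() / 60 for i in incidents]
median_time_to_restore = median(restore_minutes)

print(f"Deployment frequency: {deployment_frequency:.2f} deployments/day")
print(f"Median lead time for changes: {median_lead_time:.1f} hours")
print(f"Change failure rate: {change_failure_rate:.1f}%")
print(f"Median time to restore service: {median_time_to_restore:.1f} minutes")
```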

The DORA metrics thus helped the client understand the effectiveness of the new deployment processes and identify areas for further improvement across the SDLC.

Summary:

Apache DevLake has drastically improved how our client manages, analyzes, and acts on DevOps metrics. It has unified multiple data sources and provided real-time insights, allowing the client to run the business efficiently, cut costs, and scale operations with ease while focusing on delivering superior payment solutions to their customers.
