Performance testing is a critical aspect of software development, as it ensures that an application can perform optimally under heavy loads and traffic. However, traditional performance testing with open-source tools such as Apache JMeter, Gatling, and K6 used in isolation can be laborious, time-consuming, and expensive.

Limitations of Traditional Performance Testing

  • One of the most significant challenges with traditional performance testing is the extensive maintenance needed to keep test scripts portable. Scripts depend on specific software and plugin versions, which creates compatibility issues when a team works with multiple versions of an open-source performance testing tool. Scripts also need updating for every change in the application, which adds up to an enormous amount of maintenance work.
  • Another limitation of traditional performance testing is the significant manual effort involved: a single round of load execution typically needs 4 to 6 hours of manual work for data setup, environment sanity checks, a single-user test, and so on.
  • Likewise, when multiple load tests run in parallel, a single fixed-configuration machine quickly hits resource limits. The process becomes time-consuming and expensive, and handling such load executions demands a higher level of expertise.
  • Similarly, monitoring different data points, such as APM metrics, server logs, and performance test results, often means gathering and analysing data from several tools by hand, which is cumbersome and error-prone.
  • For example, consider an e-commerce website that needs to handle a high volume of traffic during a flash sale. With traditional methods, the development team would create performance test scripts for each user flow and then maintain them extensively to keep them portable across the tool versions used within the team. Load execution on load generators with a predefined configuration (CPU and memory) would be expensive. Additionally, monitoring data points such as server logs would require manual gathering and analysis, which is time-consuming.

Solution

To address these challenges, we’ve developed a cost-effective, end-to-end load execution framework built around JMeter using Python, leveraging open-source tools such as Git, Jenkins, InfluxDB, and Grafana alongside cloud services.

How do we do it?

We use JMeter for its advantages: it is open source, portable as a tool, and customisable through its built-in elements and extensions (plugins).

Workflow

  1. Upload the required scripts and dependencies to Git.
  2. Then, trigger the Jenkins job with parameters.
  3. Next, the job creates a Fargate task with the required configuration (such as CPU count) and the JMeter and Python versions, with dependencies taken from the Docker image and job parameters (see the first sketch after this list).
  4. Once the task is up, load execution begins and a backend listener streams live metrics to InfluxDB for visualisation in Grafana.
  5. On test completion, JMeter pushes the result file to an S3 bucket.
  6. Python scripts then parse the test results into a JMeter response time table, a Grafana metric graph snapshot, and New Relic server metric graphs (a parsing sketch follows this list).
  7. Finally, a Python API script generates comprehensive reports in Notion/Confluence (sketched below).
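
To illustrate step 3, here is a minimal sketch of launching a load-generator task on Fargate with boto3. The cluster name, task definition, subnet, container name, and environment variables are hypothetical placeholders, not the framework’s actual values:

```python
# Hypothetical sketch of step 3: launching a load-generator task on Fargate.
# Cluster, task definition, subnet and container names are placeholders.
import boto3

ecs = boto3.client("ecs", region_name="us-east-1")

response = ecs.run_task(
    cluster="perf-testing-cluster",           # placeholder cluster
    launchType="FARGATE",
    taskDefinition="jmeter-load-generator",   # Docker image with JMeter + Python
    count=1,
    networkConfiguration={
        "awsvpcConfiguration": {
            "subnets": ["subnet-0123456789abcdef0"],   # placeholder subnet
            "assignPublicIp": "ENABLED",
        }
    },
    overrides={
        "containerOverrides": [{
            "name": "jmeter",                 # placeholder container name
            # Jenkins build parameters passed through as environment variables
            "environment": [
                {"name": "TEST_PLAN", "value": "checkout_flow.jmx"},
                {"name": "THREADS", "value": "500"},
            ],
        }]
    },
)
print("Started task:", response["tasks"][0]["taskArn"])
```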
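For step 6, here is a sketch of how the result file can be parsed into a response time table, assuming JMeter’s default CSV .jtl layout; the framework’s actual parsing scripts may compute additional statistics:

```python
# A minimal sketch of step 6, assuming JMeter's default CSV .jtl layout
# (columns such as label, elapsed and success).
import csv
from collections import defaultdict
from statistics import mean, quantiles

samples = defaultdict(list)   # response times per transaction label
errors = defaultdict(int)     # failed sample count per label

with open("results.jtl", newline="") as f:    # result file pulled from S3
    for row in csv.DictReader(f):
        samples[row["label"]].append(int(row["elapsed"]))
        if row["success"] != "true":
            errors[row["label"]] += 1

print(f"{'Transaction':<30}{'Count':>8}{'Avg ms':>9}{'P90 ms':>9}{'Errors':>8}")
for label, times in sorted(samples.items()):
    # statistics.quantiles needs at least two data points
    p90 = quantiles(times, n=10)[8] if len(times) > 1 else times[0]
    print(f"{label:<30}{len(times):>8}{mean(times):>9.0f}{p90:>9.0f}{errors[label]:>8}")
```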
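And for step 7, an illustrative sketch of publishing the report through the Confluence REST API. The base URL, space key, page body, and credentials are placeholders, and a Notion variant would call the Notion API instead:

```python
# Illustrative sketch of step 7: publishing the report to Confluence via its
# REST API. URL, space key, page body and credentials are placeholders.
import requests

CONFLUENCE_URL = "https://example.atlassian.net/wiki/rest/api/content"

page = {
    "type": "page",
    "title": "Load Test Report - Flash Sale Scenario",
    "space": {"key": "PERF"},    # placeholder space key
    "body": {
        "storage": {
            "value": "<h2>Response Times</h2><p>...generated HTML table...</p>",
            "representation": "storage",
        }
    },
}

resp = requests.post(
    CONFLUENCE_URL,
    json=page,
    auth=("user@example.com", "api-token"),   # placeholder credentials
    timeout=30,
)
resp.raise_for_status()
print("Created page:", resp.json()["_links"]["webui"])
```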

Outcome of the workflow

  1. By using Git to store standardised scripts and dependencies in a centralised repository, we’ve made collaboration and code sharing much more efficient.
  2. The framework combines AWS EC2, ECS, S3, and CloudWatch with JMeter, InfluxDB, Grafana, New Relic, and Notion/Confluence to provide a comprehensive end-to-end load execution solution.
  3. Additionally, the framework uses user-defined ECS Fargate tasks to generate the required load with optimal resource utilisation, which keeps costs down.
  4. JMeter in turn captures response times, error rates, and user metrics, which are pushed to InfluxDB and visualised in Grafana for analysis (see the query sketch after this list).
  5. This provides a comprehensive view of the application’s performance, enabling the team to make data-driven decisions and quickly identify any performance issues.
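
As a rough illustration of point 4, the same InfluxDB data that feeds the Grafana dashboards can also be queried directly from Python. The host and database names below are placeholders, and the "jmeter" measurement and "avg" field assume the backend listener’s default settings:

```python
# Rough sketch: querying the InfluxDB data that feeds the Grafana dashboards.
# Host and database are placeholders; the "jmeter" measurement and "avg"
# field assume the backend listener's default settings.
from influxdb import InfluxDBClient  # pip install influxdb

client = InfluxDBClient(host="influxdb.internal", port=8086, database="jmeter")

# Mean response time per transaction over the last hour
query = (
    "SELECT MEAN(avg) FROM jmeter "
    "WHERE time > now() - 1h GROUP BY transaction"
)
for (_, tags), points in client.query(query).items():
    for point in points:
        print(tags.get("transaction"), round(point["mean"], 1), "ms")
```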

In conclusion, traditional performance testing methods can be challenging, time-consuming, and expensive. Our end-to-end load execution framework offers a comprehensive alternative: it cuts a significant portion of the manual work (roughly 3 to 4 hours per round of testing), eliminates spend on unused load generation resources, reduces developers’ dependency on existing script testing, improves tool availability, shortens application testing time, and provides a single source of truth for all metrics in a comprehensive report.