
Best Practices in Software Performance Engineering


Introduction

In today’s competitive and ever-changing world, where most software applications are complex systems with distributed, multi-tiered, Web-based architectures, one of the key success factors is the application’s actual performance.

Imagine a scenario where your application is great in design, with a fantastic look and feel and a highly user-friendly GUI, but performs poorly, with users waiting several minutes to complete a single transaction. Add to that very high system downtime due to failures, frequent restarts, and incorrect operation under heavy traffic. All of these factors would understandably lead to losses in revenue, market share, and the company’s reputation.

Software performance testing thus plays an essential and significant role in the entire software testing cycle: the earlier performance problems are found in the development cycle, the more easily and cost-effectively they can be fixed by providing the necessary hardware and/or software solutions.

This blog focuses on the performance engineering practices that we follow here at Xoriant to deliver high-quality software, on time, that meets the required performance objectives within the allocated budget.

Performance testing types

Following are some of the types of performance testing that an application usually goes through during the testing cycle:

• Load Testing: Determine whether the application can meet a desired service level under real-world volumes.

• Spike Testing: Simulate a sudden increase in the number of concurrent users performing a specific transaction to determine the application’s behavior under abnormal traffic conditions.

• Endurance Testing/Soak Testing: Determine whether the application can sustain a continuous, expected load for an extended period of time. Memory utilization is usually also monitored during endurance tests to detect potential memory leaks, since even small leaks can become a severe problem under peak traffic, degrading performance and eventually causing the system to run out of memory.

• Stress Testing: Determine the application’s breaking point by pushing it beyond the maximum load conditions it is designed to handle (typically specified in terms of the number of concurrent users/transactions, memory and disk space exhaustion, etc.).

• Scalability Testing: Determine whether the application under test can gracefully handle increases in workload when the hardware is upgraded (for example, by adding extra memory and disks to the system).
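
To make these test types concrete, here is a minimal load-test harness sketch in Python using only the standard library. The URL, user count, request count, and think time are illustrative placeholders rather than values from a real project; in practice a dedicated tool such as JMeter or LoadRunner would generate the load. The same harness becomes a spike test if the user count is suddenly increased, and an endurance test if it is left running for hours while memory utilization is monitored.

# Minimal load-test harness sketch (standard library only).
# TARGET_URL, USERS, REQUESTS_PER_USER and THINK_TIME_SECONDS are
# hypothetical placeholders for illustration.
import time
import urllib.request
from concurrent.futures import ThreadPoolExecutor
from statistics import mean

TARGET_URL = "http://localhost:8080/"   # hypothetical system under test
USERS = 10                              # simulated concurrent users
REQUESTS_PER_USER = 20
THINK_TIME_SECONDS = 1.0                # pause between a user's requests

def simulate_user(user_id):
    # One virtual user: issue requests in a loop and record response times.
    timings = []
    for _ in range(REQUESTS_PER_USER):
        start = time.perf_counter()
        try:
            with urllib.request.urlopen(TARGET_URL, timeout=10) as resp:
                resp.read()
            timings.append(time.perf_counter() - start)
        except OSError:
            timings.append(None)        # record a failure, not a timing
        time.sleep(THINK_TIME_SECONDS)
    return timings

with ThreadPoolExecutor(max_workers=USERS) as pool:
    results = list(pool.map(simulate_user, range(USERS)))

ok = [t for user in results for t in user if t is not None]
failures = sum(t is None for user in results for t in user)
print(f"requests: {len(ok) + failures}, failures: {failures}, "
      f"mean response: {mean(ok):.3f}s")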

The performance testing process

The following diagram shows the processes involved in the entire performance testing cycle along with the flow between them.

 

[Figure: Processes involved in the performance testing cycle]

Requirements analysis

During the requirements stage, the system performance criteria are captured and documented so they can be approved by the customer. This process helps identify the testing objectives, understand the system components and their configuration, and comprehend how the system is used in the production environment.

Performance test planning

Based on the requirements, the key performance testing activities are planned, which include:

• Identifying and finalizing the test tools.
• Preparing the various checklists to be used during actual execution.
• Identifying the test resources, including hardware, software, and human resources.
• Identifying the baseline for the test setup.
• Identifying the risks involved in performance testing.
• Estimating and planning the performance test cycle.

Testing tools’ identification and finalization

Based on the requirements, identify tools that can support the number of simultaneous virtual users fixed in the test plan, fit within the cost of testing, support the features in the particular build, and run on the target platform. Since these requirements can often be met by good open-source performance testing tools, investing in proprietary, licensed tools must be properly thought over.

Some of the popular performance testing tools in use these days are LoadRunner by HP, SilkPerformer by Borland, WebLOAD (open source), Apache JMeter (open source), and the Web Application Stress tool by Microsoft.

Preparing the various checklists

Having a checklist for performance testing is extremely important. It helps create a better definition of the required performance and makes the requirements easier to track. Without properly defined performance testing requirements, teams may have to spend a great deal of time either gathering information during actual testing, which delays the testing itself, or reworking the project because of insufficient information.

Checklist to ensure availability of the correct performance testing environment

The following points should be verified to ensure that the performance testing environment is correctly set up:

• Does the test environment exist?
• Is the environment self-contained?
• Are all required licenses in place?
• Are systems for the load testing identified?
• Is the environment modeled close to the production environment?
• Is a copy of production data available for testing?
• Are replacement servers available?
• Are the tools and utilities identified compatible with the environment?
• Is load balancing available?
• Are system deployment diagrams in place?
• Are testers well versed with the environment?

Checklist to ensure proper performance test planning

• Are people with the required skill sets available?
• Have version control processes been used to ensure the correct versions of applications and data in the test environment?
• When will the application be ready for performance testing?
• How much time is available for performance testing?
• How many iterations of testing will take place?
• Have external and internal support staff been identified?
• Have back-up procedures been written?
• Have any contingency situations been considered and provisioned for?

Identifying the test resources

Identify suitable hardware and software that closely replicate the production environment, since using exactly the same hardware and software as in actual production is often neither cost-effective nor feasible. Also, identifying the human resources at the beginning will help you plan the training required to understand the application and its usage, as well as to use the testing tools and any third-party software needed during the testing phase. This in turn helps the testers feel confident while executing the performance tests.

Identifying baselines for the test setup

Identify the baselines for the test setup for every type of test you are going to perform. The setup should be a replica of the real environment. This has two advantages: first, the results from the various categories of tests are not skewed by the type of hardware you are using; second, you can depend on the test results because the test setup is close to the production environment.
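
As an illustration of what a recorded baseline might contain, the following sketch reduces a run’s response times to a few summary statistics that later runs can be compared against. The sample values are made up for illustration; real numbers would come from a harness like the one sketched earlier.

# Sketch: summarizing a baseline run into comparable statistics.
# The response_times values are illustrative, not real measurements.
from statistics import mean, quantiles

response_times = [0.21, 0.25, 0.19, 0.40, 0.22, 0.31, 0.27, 0.95, 0.24, 0.26]

def summarize(times):
    # Return the metrics a baseline is typically recorded as.
    pct = quantiles(times, n=100)       # 1st..99th percentile cut points
    return {
        "mean": mean(times),
        "p90": pct[89],
        "p95": pct[94],
        "max": max(times),
    }

baseline = summarize(response_times)
print({k: round(v, 3) for k, v in baseline.items()})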

Identifying the risks

Performance testing is a risk-driven and time-consuming activity. If the level of risk is small, the effort involved in performance testing activities can be correspondingly small; if the risk is high, then a significant amount of effort is needed. Hence, identify these risks at the beginning and document them in the plan to build a common understanding among all stakeholders and to define clear expectations for each of them throughout the performance testing cycle.

Estimate and plan the performance test cycle

Because performance testing is a risk-driven and time-consuming activity, precise estimation is very important. It allows the performance testing cycle to be scheduled early enough that any bottlenecks found can be fixed and retested during the test cycle itself.

Test design

This phase includes designing and developing the test cases for each performance test type. Prioritize these test cases and test scenarios so that the critical functionalities, the riskiest use cases, and the high traffic/volume conditions are tested first. The deliverable for this phase is the test case design document, which is the input for the test script and test suite creation activities.

Test script and test suite creation

This phase covers developing the test scripts and designing a test suite, including the actual performance test execution scripts as well as the system monitoring scripts.

Test execution

The test execution phase includes running the actual tests and collecting statistics for analysis. The following items need to be considered while actually executing the performance tests:

• Always execute the performance tests in a controlled environment. Testing without dedicated servers and good configuration management will yield test results that are not reproducible. If you cannot reproduce the results, then you cannot accurately measure the improvements when the next version of the system is ready for testing.

• Clear the application and database logs after each performance test run, since excessively large log files occupy disk space, increasing the chances of the system running out of space during the test; they can also artificially skew the performance results.

• If the application uses a database, then the application and the database should be installed on separate systems.

• Validate the test setup by sending a few requests before you actually start the complete test.

• Prioritize the test cases and test scenarios that need to be executed first so that important bottleneck issues are identified early.

• Always generate the load from multiple client computers.

• Make sure that the client and server systems are located on the same network; otherwise, extra network hops may add latency to the request/response times.

• Designate a single client to capture client-side data such as response time or requests per second. Consolidate the data at that single client and generate results based on the average values while load is generated on the system.

• Include a buffer time between the incremental increases of users during a load test.

• Use zero think time if you need to fire concurrent requests; this can help you identify bottleneck issues (see the sketch after this list).

• Monitor all computers involved in the test, including the client that generates the load. This is important because you should not overly stress the client.

• Do not allow the test system resources to cross resource threshold limits by a significant margin during load testing, because this may affect the test data and test results.

• Do not run tests in live production environments that have other network traffic. Use an isolated test environment that is representative of the actual production environment.

• Do not place too much stress on the client test computers. The processor and memory usage on your client computers should stay well below the threshold limit (CPU: 75 percent). Otherwise, the client is very likely to hang or crash, invalidating the test results and necessitating re-execution of the tests.

• Do not try to break the system during a load test. The intent of a load test is not to break the system but to observe performance under expected usage conditions. You can stress test the system separately to determine the most likely modes of failure so they can be addressed.
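
As promised above, here is a sketch of firing truly simultaneous requests with zero think time, using a barrier so that all threads release at the same instant. The URL and concurrency level are hypothetical placeholders.

# Sketch: zero-think-time concurrent requests released by a barrier.
import threading
import time
import urllib.request

TARGET_URL = "http://localhost:8080/checkout"   # hypothetical transaction
CONCURRENCY = 25

barrier = threading.Barrier(CONCURRENCY)
timings = []
lock = threading.Lock()

def fire():
    barrier.wait()                      # all threads start together
    start = time.perf_counter()
    try:
        with urllib.request.urlopen(TARGET_URL, timeout=10) as resp:
            resp.read()
        elapsed = time.perf_counter() - start
    except OSError:
        elapsed = None                  # count failures separately
    with lock:
        timings.append(elapsed)

threads = [threading.Thread(target=fire) for _ in range(CONCURRENCY)]
for t in threads:
    t.start()
for t in threads:
    t.join()

successes = [t for t in timings if t is not None]
print(f"failures: {timings.count(None)}, slowest: {max(successes):.3f}s")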

During the test, the system should be closely monitored to capture the following metrics and statistics at regular intervals of time:

• CPU utilization
• Memory usage (to identify memory and thread leaks)
• Disk space
• Queue lengths and I/O waits
• Network usage on individual clients, application and database servers (bytes, packets, segments, frames received and sent per sec, bytes total/sec, current bandwidth, connection failures, connections active, failures at network interface level and protocol level)
• Web server (requests and responses per second, services succeeded and failed, server problems if any)
• Request and response times
• Throughput
• Time (session time, reboot time, transaction time, task execution time)
• Hits per second, requests per second, transactions per second
• Database problems (settings and configuration, usage, read/sec, write/sec, any locking, queries, compilation errors)
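
One simple way to capture several of these host-level metrics at regular intervals is sketched below. It assumes the third-party psutil package (the article does not prescribe a specific monitoring tool), and the sampling interval and duration are illustrative.

# Sketch: sampling host metrics at regular intervals during a test run.
# Assumes the third-party psutil package (pip install psutil).
import csv
import time
import psutil

SAMPLE_SECONDS = 5                      # sampling interval
DURATION = 60                           # monitoring window for this sketch

with open("system_metrics.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["timestamp", "cpu_pct", "mem_pct", "disk_pct"])
    end = time.time() + DURATION
    while time.time() < end:
        writer.writerow([
            time.strftime("%H:%M:%S"),
            psutil.cpu_percent(interval=SAMPLE_SECONDS),  # blocks one interval
            psutil.virtual_memory().percent,
            psutil.disk_usage("/").percent,
        ])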

Test analysis

This is the most important phase. At the end of test execution, the test results and collected data are analyzed to assess system performance and identify bottlenecks.

A preliminary snapshot of this analysis is sent to the development group along with performance tuning recommendations so that the issues can be fixed. Once fixes or tuning are applied to the test system, the tests are re-executed; test execution and test analysis are therefore iterative processes that continue until the desired performance is achieved.
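
A basic form of this analysis is checking the measured statistics against the performance objectives agreed upon during requirements analysis, as in the sketch below. The objectives and measurements shown are invented for illustration.

# Sketch: checking measured statistics against agreed objectives.
# Both dictionaries hold illustrative values, in seconds.
objectives = {"mean": 0.300, "p95": 0.800}   # from the requirements document
measured = {"mean": 0.240, "p95": 0.950}     # e.g. from summarize() above

for metric, limit in objectives.items():
    status = "PASS" if measured[metric] <= limit else "FAIL - investigate"
    print(f"{metric}: measured {measured[metric]:.3f}s, "
          f"objective {limit:.3f}s [{status}]")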

Identifying bottlenecks and tuning the system

If any bottlenecks are found in the system, its performance can be enhanced by improving the architecture, design, code, and configuration. Tuning of the code can be done using developer support tools such as profilers and code coverage analysis tools.

The following considerations should be taken into account while applying any performance optimization:

• Always practice change control. Make backups of all configuration files you alter, so you can revert to an older version if required.

• Take a baseline of the performance before and after changing any settings (a simple comparison is sketched after this list).

• Document the changes. This helps others understand why the changes were made.

• Don’t apply any changes to the test system unless you have a reason to. If everything is working well enough and the performance data are acceptable to the customer, why make any changes? In such a case, you should contact the customer with your suggestions and apply the changes only after they have been approved by the customer.
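
The before/after comparison mentioned in the list can be as simple as the sketch below, which reports the percentage change for each baseline metric. The two sets of figures are invented for illustration and would in practice come from two separate, controlled test runs.

# Sketch: comparing before/after baselines to quantify a tuning change.
# The values are illustrative placeholders, in seconds.
before = {"mean": 0.310, "p95": 0.780, "max": 1.450}
after = {"mean": 0.240, "p95": 0.520, "max": 1.100}

for metric in before:
    delta = (after[metric] - before[metric]) / before[metric] * 100
    print(f"{metric}: {before[metric]:.3f}s -> {after[metric]:.3f}s "
          f"({delta:+.1f}%)")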

Test results and reporting

At the end of the performance tests, test results are correlated and summarized in an executive summary report which includes the following contents:

• Objectives of the test
• Reference documents
• Test setup (hardware configurations, software configurations, tool settings, list of transactions, scenarios tested, client machine details and its configurations, transaction generation pattern etc.)
• Test results (list of significant runs, scalability statistics, request and response time statistics, server side statistics etc.)
• Findings, recommendations and test log

Knowledge repository

Maintaining a proper knowledge repository is a vital part of the performance testing cycle: it is the link between current and future activities because it records all the activities performed during performance testing. The objective is to capture the knowledge gained during the project so that it can be reused in future releases.

To summarize, we saw in this blog that there are several steps and processes that need to be followed to ensure that a software development project progresses in the intended manner and within the available time and resource limits. Thankfully, many of these processes have now been made easier by good-quality, readily available software tools.


