Performance Testing – Script Guidelines

In this article, we explore the process of creating a performance test script and the standards it should adhere to, so that scripts are easy to maintain and follow and the test results are better understood.

The following are the main steps of the process.

  • Application Protocol Identification
  • Recording
  • Script Enhancement
  • Script Replay

Application Protocol Identification

The protocol the application uses must be identified so that the recording can be captured.

Recording

Before any recording can be done, the user journey for the Business Process must be clearly understood. The Business Process must be clearly stated in the Test Plan that has been reviewed and signed off. It is advisable that each step in the user journey is measured as a transaction. The script naming convention must make the script identifiable. Below is an example of a naming convention that can be used.

[Image: Pt Script Table – script naming convention]

The following is an example of a script name.

[Image: Pt Script 1 – example script name]

  • The script name has the prefix of the Application Under Test (AUT)
  • Each script will have a unique script number
  • The script code is a 3-letter acronym of the Script/Business Process description
  • The script version can be incremented as follows
    • v00 is the first recording and v01 would be the second and so on
    • A script that has worked on multiple iterations with different data can be v05
    • A final working version used in a first test would be v10, and subsequent updates are v11 and so on
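The naming convention above can be sketched as a small helper. The exact layout from the original table image is not recoverable, so the underscore separators, the two-digit script number, and the overall field order used here are assumptions for illustration only.

```python
import re

# Assumed shape: <AUT prefix>_<script number>_<3-letter code>_v<version>
# The separators and field widths are illustrative, not from the original table.
NAME_PATTERN = re.compile(r"^[A-Z]+_\d{2}_[A-Z]{3}_v\d{2}$")

def build_script_name(aut: str, number: int, code: str, version: int) -> str:
    """Compose a script name and validate it against the assumed convention."""
    name = f"{aut.upper()}_{number:02d}_{code.upper()}_v{version:02d}"
    if not NAME_PATTERN.match(name):
        raise ValueError(f"name does not follow the convention: {name}")
    return name
```

For example, `build_script_name("AUT", 1, "BSP", 10)` produces `AUT_01_BSP_v10`, where v10 would indicate a final working version per the scheme above.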

Script Enhancement

After the script has been recorded, it can be enhanced in the following ways.

[Image: Pt Script 2 – script enhancement examples]

The following is the typical format of a transaction timer name.

[Image: Pt Script 3 – example transaction timer name]

  • “T” indicates it is a normal timer and not a common timer or end-to-end timer (see below)
  • The script code signifies the script the timer belongs to. It will allow the transactions to be grouped together in the test results
  • The script step tells you where in the Business Process the step occurs and allows the steps to be grouped sequentially in the test results
  • A meaningful ‘step description’ helps convey what the step is doing

As well as normal transaction timers it may be appropriate to have common timers that are shared between scripts for common steps. This would be in addition to the normal transaction timer and therefore a script step may have two timers, one normal and one common. The common timer allows the timing information to be aggregated across scripts which can be a useful metric. The common timer has the format C_001_Homepage. The “C” indicates it is a common timer. This is followed by the common timer number which would be incremented for the next common timer. The timer name is completed by the step description.

An end-to-end timer records the time for the entire script to execute one iteration. The end-to-end timer has the format E_BSP_BuySingleProduct. The “E” indicates it is an end-to-end timer. This is followed by the script code and then the script description. This is a worthwhile metric to collect as it automatically shows the time taken for a script to complete an iteration, which is especially useful when the throughput for the Business Process is not achieved in a test. Some performance test tools collect this statistic by default.
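The three timer conventions can be sketched as small helper functions. The common and end-to-end formats follow the C_001_Homepage and E_BSP_BuySingleProduct examples given in the text; the normal timer's step-number width and separators are assumptions, since the original example image is not recoverable.

```python
def normal_timer(code: str, step: int, description: str) -> str:
    """T-prefixed transaction timer; two-digit step number is an assumption."""
    return f"T_{code.upper()}_{step:02d}_{description}"

def common_timer(number: int, description: str) -> str:
    """Common timer shared across scripts, per the C_001_Homepage format."""
    return f"C_{number:03d}_{description}"

def end_to_end_timer(code: str, description: str) -> str:
    """End-to-end timer for one iteration, per the E_BSP_BuySingleProduct format."""
    return f"E_{code.upper()}_{description}"
```

Because the script code and step number sort naturally, timers named this way group together and appear sequentially in the test results.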

When extracting dynamic values from responses for use in future requests, it is a good idea to prefix the name with “ev” to indicate it is an extracted value (e.g. ev_sessionid). This will quickly differentiate it from other parameters used in the script from data files.

For system-generated parameters such as time, date, or iteration, it is advisable to prefix the name with an underscore. For example, iteration can be named as “_iteration”. This will allow the quick identification of the type of parameter it is.
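The parameter-prefix conventions above can be captured in a small classifier; the function name and return labels here are illustrative.

```python
def parameter_kind(name: str) -> str:
    """Classify a script parameter by the naming prefixes described above."""
    if name.startswith("ev_"):
        return "extracted value"    # correlated from a server response
    if name.startswith("_"):
        return "system-generated"   # e.g. _iteration, _date, _time
    return "data file parameter"    # everything else comes from data files
```

For example, `parameter_kind("ev_sessionid")` returns `"extracted value"` and `parameter_kind("_iteration")` returns `"system-generated"`, making the parameter's origin obvious at a glance.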

Script Replay

Script replay will aid in the debugging and enhancement of the script. It will help verify the correlations, confirm that any added program logic works, and check that the scripts work with different data. For this phase, it is a good idea to increase the logging level to help with debugging, but logging should be kept to a minimum for actual full-scale tests due to the amount of output it would produce. In this phase, it is also advisable to check that the appropriate updates are made to back-end databases, which will indicate the scripts are working correctly.

Usually, a test script will initially be checked in single-user mode with a single iteration, which can then be increased to perhaps 5 users and 10 iterations to give confidence that the script will work in a full-scale test. After the script is verified, it is ready to be used in test scenarios, and the version number of the script can be updated to indicate it is the final version.
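The shakeout progression described above could be recorded as a simple checklist; the figures come from the text, but the structure itself is illustrative.

```python
# Shakeout stages before full-scale tests: a single-user, single-iteration
# check, followed by a 5-user x 10-iteration confidence run (per the text).
SHAKEOUT_STAGES = [
    {"users": 1, "iterations": 1},
    {"users": 5, "iterations": 10},
]

def describe_stages(stages):
    """Render each stage as a human-readable summary line."""
    return [f"{s['users']} user(s) x {s['iterations']} iteration(s)" for s in stages]
```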

In conclusion, it is important to have a standardised process for creating scripts. Adhering to a standard will help in the management of test assets and the ability to analyse the output quickly and efficiently. It will also help other performance testers to understand and maintain the scripts.

To see how SQA Consulting may assist your company in performance testing your applications, please contact us.