What metrics are reported back for a URL?
For each URL in a test, Load Impact reports the status codes the tested server responded with. For each of those status codes, it reports the minimum, maximum, and average response time, plus a count of how many responses with that particular status code were received for the URL.
On the test results page, there is a tab called “URLs” where you can see a list of every URL included in the test, along with some statistics about each URL. This is how it looks:
|Column|Description|
|---|---|
|URL|Unique resource requested during the test|
|Load Zone/User Scenario|The name of the load zone and user scenario the request originated from|
|Method|HTTP verb used to make the request|
|Status|Status code returned|
|Count|How many times the resource was requested during the test|
|Size/Compressed|Size of the resource|
|Min/Max|Minimum and maximum response time for the resource|
|Avg|Average response time for the resource|
Note: Among the status codes, you will normally not see any 3xx redirects. The default behavior is to follow a redirect and report the transaction time as the time from when the first (redirected) request was made until the final (non-redirected) transaction was completed. This behavior can be overridden – refer to the http.request and http.request_batch methods in our Load Script API.
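To make the redirect behavior concrete, here is a small illustrative sketch (plain Python, not the Load Script API) of how the reported transaction time for a redirected request covers the whole redirect chain, so only the final status code appears in the URL list:

```python
# Illustrative only: how a redirect chain collapses into one reported
# transaction. "hops" is a made-up representation of the chain.
def transaction_time(hops):
    """hops: list of (status_code, elapsed_seconds), one per request in
    a redirect chain. The reported transaction carries the final,
    non-redirect status code and a time spanning the entire chain."""
    total = sum(elapsed for _, elapsed in hops)
    final_status = hops[-1][0]
    return final_status, total

# A 301 redirect followed by the final 200 response:
status, elapsed = transaction_time([(301, 0.12), (200, 0.30)])
print(status, round(elapsed, 2))  # the 301 never shows up on its own
```

This is why, unless you override the behavior, a URL that always redirects is reported under its final status code with a slightly longer response time.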
More about results metrics
Response times, and other metrics, are measured continually throughout the test, in 3-second intervals. The load generator will record HTTP transaction times for each individual URL and every 3 seconds it reports the maximum, minimum and average transaction time seen for that URL during the most recent 3-second interval.
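The 3-second aggregation described above can be sketched roughly as follows. This is an illustrative Python sketch, not Load Impact's actual implementation; the sample format and names are assumptions:

```python
# Minimal sketch of per-URL aggregation into 3-second intervals.
# Input: (timestamp, url, elapsed) samples; output: min/max/avg/count
# per (interval_start, url). All names here are illustrative.
from collections import defaultdict

INTERVAL = 3  # seconds

def aggregate(samples):
    buckets = defaultdict(list)
    for ts, url, elapsed in samples:
        start = int(ts // INTERVAL) * INTERVAL  # interval the sample falls in
        buckets[(start, url)].append(elapsed)
    return {
        key: {
            "min": min(times),
            "max": max(times),
            "avg": sum(times) / len(times),
            "count": len(times),
        }
        for key, times in buckets.items()
    }

stats = aggregate([
    (0.5, "/index.html", 0.20),
    (1.9, "/index.html", 0.40),
    (3.2, "/index.html", 0.10),
])
print(round(stats[(0, "/index.html")]["avg"], 2))  # 0.3
print(stats[(3, "/index.html")]["count"])          # 1
```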
Currently these metrics are stored once every 3 seconds during a test, and can be graphed:
|Metric|Description|Tagged with|
|---|---|---|
|Maximum transaction time|Highest transaction time seen during the 3-second sample period|URL, HTTP response code, load zone, user scenario|
|Minimum transaction time|Lowest transaction time seen during the 3-second sample period|URL, HTTP response code, load zone, user scenario|
|Average transaction time|Average transaction time seen during the 3-second sample period|URL, HTTP response code, load zone, user scenario|
|Number of transactions|Number of transactions seen during the 3-second sample period|URL, HTTP response code, load zone, user scenario|
|User load time|Approximate load time as experienced by a user|load zone|
|Accumulated load time|Sum of all individual transaction times during the period|load zone|
|Bandwidth usage|Average bandwidth usage during the 3-second sample period|load zone|
|HTTP requests/second|Average HTTP requests per second issued during the 3-second sample period|load zone|
|Failure rate|Number of failed HTTP transactions per second during the 3-second sample period|load zone|
|Clients active|Number of concurrent, simulated clients active at the end of the 3-second sample period|load zone|
|Connections active|Number of concurrent TCP connections in use at the end of the 3-second sample period|load zone|
|Load generator memory utilization|Percentage of system memory used by the load generator application on the load generator hosts|load zone|
|Load generator CPU utilization|Percentage of system CPU used by the load generator application on the load generator hosts|load zone|
A result metric being tagged with something means that you can easily separate data based on the tag. For example, if “Bandwidth usage” is tagged with “load zone” and you have executed a load test using the Tokyo and Dublin load zones, you can plot one graph displaying the bandwidth usage between your web server and Tokyo, and another displaying the bandwidth usage between your web server and Dublin. Transaction times are tagged with several things, which enables very advanced comparisons – for example, you can plot a graph showing the maximum response times for the URL “http://www.dom.ain/index.html” when accessed from the Tokyo load zone alongside the average response times for the URL “http://www.dom.ain/index2.html” when accessed from the Dublin load zone.
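Conceptually, tag-based separation works like a filter over stored data points. The sketch below is a hypothetical illustration (the storage format and field names are made up, and the data is fabricated for the example):

```python
# Hypothetical sketch: each stored data point carries tags, and a graph
# is just the series of points matching a tag query.
points = [
    {"metric": "bandwidth", "load_zone": "Tokyo",  "value": 5.1},
    {"metric": "bandwidth", "load_zone": "Dublin", "value": 3.4},
    {"metric": "bandwidth", "load_zone": "Tokyo",  "value": 4.9},
]

def series(points, **tags):
    """Return the values of all points whose tags match the query."""
    return [p["value"] for p in points
            if all(p.get(k) == v for k, v in tags.items())]

print(series(points, load_zone="Tokyo"))   # [5.1, 4.9] – one graph
print(series(points, load_zone="Dublin"))  # [3.4]      – another graph
```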
User load time is the average time it has taken user scenarios (any user scenarios) to run to completion. Note that only time spent actively loading things is counted here – sleep statements in a load script are excluded. A user scenario will commonly take much longer than 3 seconds to complete, however, so the data for each 3-second period consists of the user scenario load times that were reported during that period (i.e. the actual user scenario execution may have started much earlier).
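The distinction between active loading time and sleep time can be illustrated with a short sketch. This is not the Load Script API; the event representation below is an assumption for illustration only:

```python
# Illustrative only: user load time counts active loading time and
# excludes script sleeps. The ("load"|"sleep", seconds) events are a
# made-up representation of one user scenario run.
def user_load_time(events):
    """Sum only the active loading time from one scenario run."""
    return sum(t for kind, t in events if kind == "load")

# A scenario that loads two pages with a 5-second sleep between them
# contributes 2.0 seconds of user load time, not 7.0:
print(user_load_time([("load", 1.2), ("sleep", 5.0), ("load", 0.8)]))
```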