Performance is a critical aspect of most applications. Everything may be pretty, really well developed, intuitive and all, but if the performance is not up to par the user experience is going to suck.
Unfortunately most of these performance problems only show themselves after some time in production, making them much more cumbersome to solve and handle.
In this post I'm going to talk about something that should, IMHO, be integrated in the development lifecycle before the application goes live: Load-tests.
I typically talk separately about performance-testing and load-testing. For me, performance testing is something you do with a profiler like dotTrace or ANTS: you run your application in the context of a user, trigger some functionality, take some snapshots and then analyze the results. Typically a profiler shows you where in the code most of the time is being spent.
Load-testing is a different beast. You're testing whether the application, under enough load, can still handle requests at an acceptable rate.
With a good load testing strategy, and taking into account the target hardware, you'll be able to estimate the capacity of your solution in terms of simultaneous users / requests. Just as importantly, you'll find out whether or not your solution scales well. This is of extreme importance because nowadays it's often cheaper (and less risky) to buy a new server than to pay your developers to refactor/optimize the solution.
With that in mind, in this series of posts I'm going to show some tools that I've used for load testing in the context of ASP.NET MVC, starting with a really basic one called ApacheBench and then moving on to a much more powerful tool called JMeter.
ApacheBench
For simpler scenarios nothing beats ApacheBench (ab). It's a single executable that can fire simultaneous HTTP requests at a specific URL and display some really simple metrics.
It ships with the Apache HTTP Server, but it's not very practical to install the whole server just to get this small executable. You can follow the tips here to get just the file.
Now, let's create our sample website. Nothing here will be ASP.NET MVC specific, so use whatever floats your boat. I'm going to use the empty ASP.NET MVC template and create a really simple Home controller with an Index action that returns a view.
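For reference, here's a minimal sketch of what that controller looks like (nothing beyond the default empty-template conventions is assumed):

using System.Web.Mvc;

public class HomeController : Controller
{
    // GET: /Home/Index
    public ActionResult Index()
    {
        return View();
    }
}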
After extracting "ab.exe", let's run it from a command prompt. Its usage is:
Usage: ab [options] [http://]hostname[:port]/path
Options are:
    -n requests     Number of requests to perform
    -c concurrency  Number of multiple requests to make
    -t timelimit    Seconds to max. wait for responses
    -b windowsize   Size of TCP send/receive buffer, in bytes
    -p postfile     File containing data to POST. Remember also to set -T
    -u putfile      File containing data to PUT. Remember also to set -T
    -T content-type Content-type header for POSTing, eg.
                    'application/x-www-form-urlencoded'
                    Default is 'text/plain'
    -v verbosity    How much troubleshooting info to print
    -w              Print out results in HTML tables
    -i              Use HEAD instead of GET
    -x attributes   String to insert as table attributes
    -y attributes   String to insert as tr attributes
    -z attributes   String to insert as td or th attributes
    -C attribute    Add cookie, eg. 'Apache=1234'. (repeatable)
    -H attribute    Add Arbitrary header line, eg. 'Accept-Encoding: gzip'
                    Inserted after all normal header lines. (repeatable)
    -A attribute    Add Basic WWW Authentication, the attributes
                    are a colon separated username and password.
    -P attribute    Add Basic Proxy Authentication, the attributes
                    are a colon separated username and password.
    -X proxy:port   Proxyserver and port number to use
    -V              Print version number and exit
    -k              Use HTTP KeepAlive feature
    -d              Do not show percentiles served table.
    -S              Do not show confidence estimators and warnings.
    -g filename     Output collected data to gnuplot format file.
    -e filename     Output CSV file with percentages served
    -r              Don't exit on socket receive errors.
    -h              Display usage information (this message)
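The examples in this post only use -n and -c, but the other options cover most day-to-day scenarios. Just as an illustration (the POST body file and the cookie value below are made up), a keep-alive POST that carries a session cookie could look like:

ab -n100 -c10 -k -p form.txt -T "application/x-www-form-urlencoded" -C "ASP.NET_SessionId=abc123" http://localhost:5454/Home/Index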
Let's start by running it as:
ab -n10 -c1 http://localhost:5454/Home/Index
We're basically issuing 10 sequential requests. The results:
Concurrency Level:      1
Time taken for tests:   0.041 seconds
Complete requests:      10
Failed requests:        0
Write errors:           0
Total transferred:      4390 bytes
HTML transferred:       260 bytes
Requests per second:    243.89 [#/sec] (mean)
Time per request:       4.100 [ms] (mean)
Time per request:       4.100 [ms] (mean, across all concurrent requests)
Transfer rate:          104.56 [Kbytes/sec] received

Connection Times (ms)
              min  mean[+/-sd] median   max
Connect:        0    0   0.5      0       1
Processing:     1    4   2.4      3       9
Waiting:        1    3   2.4      3       9
Total:          1    4   2.7      3      10

Percentage of the requests served within a certain time (ms)
  50%      3
  66%      4
  75%      5
  80%      6
  90%     10
  95%     10
  98%     10
  99%     10
 100%     10 (longest request)

Almost every metric here is useful and should be easy enough to understand. Typically the main metrics that I look for are:
- Requests per second
- The mean time per request
- The 95th percentile -> meaning in this case that 95% of the requests are served in 10 ms or less (this metric is commonly used in SLA specifications).
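These numbers are also tied together: as a rough sanity check, throughput ≈ concurrency / mean time per request. For the run above that gives 1 / 0.0041 s ≈ 244 requests per second, which matches the 243.89 reported by ab.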
Let's make things more interesting. I'm going to change my action to be:
public ActionResult Index()
{
    Thread.Sleep(500); // block the worker thread for half a second
    return View();
}

Now, running the same command again:
ab -n10 -c1 http://localhost:5454/Home/Index

Results:
Requests per second: 1.99
Time per request: 503.29 ms (mean)
95%: 504 ms

Makes perfect sense. We're issuing sequential requests and every one takes half a second to execute.
Now, if we add parallelism to the equation:
ab -n10 -c10 http://localhost:5454/Home/Index

Results:
Requests per second: 9.89
Time per request: 1011.058 ms (mean)
95%: 1010 ms

The performance starts to deteriorate. Although we're processing almost 10 requests per second, the mean time per request has surpassed 1 second.
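Quick sanity check: 10 concurrent clients divided by roughly 1.01 s per request gives the ~9.9 requests per second that ab reports. The interesting part is the latency: each request only needs 500 ms of "work", so the extra half second suggests the requests aren't all being processed in parallel but are waiting their turn for a worker thread.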
And we can even make it worse:
ab -n100 -c100 http://localhost:5454/Home/Index

Results:
Requests per second: 16.40
Time per request: 6099.349 ms (mean)
95%: 6069 ms
Almost 6 seconds per request.
Now, suppose that our SLA stated that we needed to handle 95% of the requests in less than 1 second. With the 95th percentile already at about 1 second for 10 concurrent requests, our capacity would be around 10 simultaneous requests.
It's not the focus of this post to try and optimize this code, but just for the fun of it, let's try to understand what's happening here.
While executing the Thread.Sleep, the thread is literally blocked for half a second, doing nothing but waiting to resume execution. ASP.NET serves requests from a limited pool of worker threads, so once they're all blocked, incoming requests start queuing and response times climb. Obviously your own code would not have a Thread.Sleep (I hope), but the same principle applies while waiting for a synchronous query to the database, a synchronous filesystem operation, etc.
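To make that concrete, here's a hedged sketch (the connection string, table and controller name are made up for illustration) of an action that would block its worker thread in exactly the same way while waiting on the database:

using System.Data.SqlClient;
using System.Web.Mvc;

public class ReportsController : Controller
{
    public ActionResult Index()
    {
        using (var connection = new SqlConnection("<your connection string>")) // hypothetical
        {
            connection.Open(); // blocks until the connection is established
            var command = new SqlCommand("SELECT COUNT(*) FROM Orders", connection); // hypothetical table
            ViewBag.OrderCount = (int)command.ExecuteScalar(); // blocks the worker thread until the query returns
        }
        return View();
    }
}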
Let's make our operation asynchronous to see if it does improve the capacity of our application. Fun, fun, fun, let's use the new async/await goodness to create an asynchronous action. The code becomes just:
public async Task<ViewResult> Index()
{
    await Task.Delay(500); // asynchronous wait: the worker thread is released back to the pool while the delay runs
    return View();
}

Freaking awesome. Let's test it with the same parameters as before:
ab -n10 -c1 http://localhost:5454/Home/Index

Results:
Requests per second: 1.96
Time per request: 510.129 ms (mean)
95%: 517 ms
ab -n10 -c10 http://localhost:5454/Home/Index

Results:
Requests per second: 19.16
Time per request: 522.030 ms (mean)
95%: 521 ms
ab -n100 -c100 http://localhost:5454/Home/Index

Results:
Requests per second: 162.07
Time per request: 617.035 ms (mean)
95%: 602 ms
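Doing the same capacity math as before: with 100 concurrent requests the 95th percentile is now 602 ms, comfortably inside the hypothetical 1-second SLA, whereas the synchronous version was already at its limit around 10. On the same hardware, the asynchronous action handles at least ten times the concurrency, simply because the worker threads are free to serve other requests while the 500 ms wait is in progress.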
[Chart: Mean Time per Request (ms)]