
web applications - Training for load testing web apps?




We've discussed the tools used for load testing here on ServerFault, but what about training on how to use them properly? Are there companies that specialize in IT training and cover load testing? How do you properly come up with a simulated load? How long should you run the test for? What are the best metrics to track on the server side while the test is running? And so on...


Answer





  1. First, start with the business representatives. They (should) know the application best. Identify the key transactions and the end-to-end response times. Ideally, they'll be able to hand you a document that captures their non-functional requirements. If your application is replacing a legacy application, all the better - get as many applicable usage metrics from that app as you can. This is the most critical success factor in performance testing: understanding the size of your potential user base, the number of users likely to be using it concurrently, the percentage of those users executing each of your key transactions simultaneously, and the growth rate per [timeframe]. A worked sizing example follows.
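
    As a hedged illustration of turning those business inputs into a workload model (all numbers below are hypothetical placeholders for figures the business would supply):

        # Hypothetical sizing inputs - replace with figures from the business.
        total_users = 50_000         # registered user base
        peak_concurrency = 0.04      # 4% online at peak (e.g., from legacy app metrics)
        growth_per_year = 0.20       # size the test one year ahead

        concurrent_users = total_users * peak_concurrency * (1 + growth_per_year)

        # Key-transaction mix at peak, as fractions of the concurrent population.
        tx_mix = {"search": 0.50, "view_item": 0.35, "checkout": 0.15}
        for tx, share in tx_mix.items():
            print(f"{tx}: ~{concurrent_users * share:.0f} concurrent users")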


  2. Build an automated script which simulates the key transactions. Include think time in this script. Very few users are going to power through your application/website without taking a few seconds to see what the app did in response to their input. Failure to adequately simulate think time can result in subjecting your application to unrealistic load, which leads to unhappiness all around. That being said, the business may identify that 10% of the user base are power users, and you may want to deliver your load with 90% normal users, with 'normal' think time, and 10% power users, with faster, more aggressive think times - a minimal scripted sketch follows.
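
    For example, in Locust (an assumed tool choice - the answer doesn't prescribe one), think time and the 90/10 split might look like this, with hypothetical transaction names:

        # Minimal Locust sketch: wait_time supplies think time, and the weight
        # attribute models the 90/10 normal/power user split.
        from locust import HttpUser, task, between

        class NormalUser(HttpUser):
            weight = 9                  # ~90% of the simulated population
            wait_time = between(5, 15)  # seconds of think time between actions

            @task(3)
            def browse_catalog(self):   # hypothetical key transaction
                self.client.get("/products")

            @task(1)
            def checkout(self):         # hypothetical key transaction
                self.client.post("/checkout", json={"cart_id": 42})

        class PowerUser(NormalUser):
            weight = 1                  # ~10% of the population
            wait_time = between(1, 3)   # faster, more aggressive think time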


  3. Add your virtual users over a time period (ramp-up time) - don't go from 0 to 500 in one second, unless you will actually have that kind of load (sale starts at 9:00 AM!). It's good to understand how your application will behave under load spikes, but some apps may fail in these scenarios, which is only a problem if you're actually expecting that kind of load. Otherwise, you may find yourself spending far more money than required to support a load that may never come.
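
    Most tools let you configure ramp-up directly. As a hedged sketch, Locust's LoadTestShape hook (again an assumed tool) can ramp from 0 to 500 users over ten minutes and then hold:

        # Hedged sketch: ramp to 500 users over 10 minutes, hold, then stop.
        from locust import LoadTestShape

        class RampThenHold(LoadTestShape):
            def tick(self):
                run_time = self.get_run_time()
                if run_time > 4200:                         # stop after 70 minutes
                    return None
                if run_time < 600:                          # linear ramp-up phase
                    return (int(500 * run_time / 600), 10)  # (user_count, spawn_rate)
                return (500, 10)                            # steady-state plateau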


  4. Factor in latency and network speed. For a stress test, it's great to have a gigabit Ethernet connection with less than 1 ms latency to your application, which you can use to push your application to determine when it will fail. In reality, though, your users aren't usually that close to your application - they're coming over all different types of network conditions.
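
    Network shaping usually happens outside the load tool. On a Linux load generator, one common approach is tc/netem; a hedged sketch, assuming Linux, root privileges, and that eth0 is the interface facing the application:

        # Hedged sketch: add WAN-like latency (80ms +/- 20ms jitter) to egress
        # traffic so virtual users stop looking like they sit in the data center.
        import subprocess

        def add_wan_latency(iface="eth0", delay_ms=80, jitter_ms=20):
            subprocess.run(["tc", "qdisc", "add", "dev", iface, "root", "netem",
                            "delay", f"{delay_ms}ms", f"{jitter_ms}ms"], check=True)

        def clear_latency(iface="eth0"):
            subprocess.run(["tc", "qdisc", "del", "dev", iface, "root"], check=True)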


  5. Endurance testing - at least 24 hours is recommended, more if you can afford it. You want to capture what happens to your application when periodic batch processes run, like backups, antivirus definition updates, or even IIS app pool recycles (every 29 hours by default).
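
    During a soak test it helps to keep a timestamped record of every request, so any slowdown can be lined up against backup windows, AV updates, or recycle schedules afterwards. A hedged sketch using Locust's request event (an assumed tool; field names follow its event signature):

        # Hedged sketch: append one line per request to a CSV for later
        # correlation with periodic batch processes.
        import time
        from locust import events

        @events.request.add_listener
        def log_request(request_type, name, response_time, exception, **kwargs):
            with open("soak_timings.csv", "a") as f:
                f.write(f"{time.time()},{name},{response_time},{exception is not None}\n")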


  6. Understand the difference between performance testing and load testing. Load test results generally reflect the perspective of the server. This isn't entirely true - many tools will show you the time a transaction takes in terms of time to last byte (TTLB) - but most tools today don't reflect client-side rendering times, which are material in JS-heavy applications, or ones that use XSLT, for example.
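
    To get a feel for the client-side portion, one option is to sample a real browser alongside the load test. A hedged sketch using Selenium and the W3C Navigation Timing API (both assumptions; the URL is a hypothetical key page):

        # Hedged sketch: compare server-ish TTLB with the full load time the
        # browser sees, including client-side work after the last byte arrives.
        from selenium import webdriver

        driver = webdriver.Chrome()
        driver.get("https://app.example.com/products")
        timing = driver.execute_script(
            "var t = performance.timing;"
            "return {ttlb: t.responseEnd - t.requestStart,"
            "        full: t.loadEventEnd - t.navigationStart};")
        print(f"TTLB: {timing['ttlb']} ms, full load: {timing['full']} ms")
        driver.quit()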


  7. Don't rely solely upon your automated test numbers - at least not starting on day one. Periodically validate the numbers you get back by hand. Over time you can let this subside as you become more confident in your simulations.


  8. Performance counters - every application will vary, but you won't go wrong starting with the four basic food groups: CPU, memory, disk I/O, and network I/O. A list of my preferred counters is at http://www.oneredlight.com/perf.config.txt. You can set your application up to log these counters to a 300 MB circular file with the following command line:

    logman create counter PERF -f bincirc -max 300 -si 2 --v -o "c:\perflogs\perf" -cf "perf.config"

    I've only tried these on Windows Server 2008/IIS 7/SQL Server 2008, so your mileage may vary. I would also recommend reading http://msdn.microsoft.com/en-us/library/ms998530.aspx if your application is on the Microsoft stack.





