web applications - Training for load testing web apps?





We've discussed the tools used for load testing (https://serverfault.com/questions/2107/, https://serverfault.com/questions/917/) here on ServerFault,
but what about training on how to use them properly? Are there companies that specialize
in IT training that cover load testing? How do you properly come up with a simulated
load? How long should you run the test for? What are the best metrics to be tracking on
the server side while the test is running? And so on...



Answer







  1. First, start with the business representatives.
    They (should) know the application best. Identify the key transactions and the end-to-
    end response times. Ideally, they'll be able to hand you a document which captures their
    non-functional requirements. If your application is replacing a legacy application, all
    the better - get as many applicable usage metrics from that app as you can. This is the
    most critical success factor in performance testing: understanding the size of your
    potential userbase, the number of users likely to be using it concurrently, the
    percentage of concurrent users executing each of your key transactions, and the growth
    rate per [timeframe].
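
    Those inputs can be combined into a simple workload model. A minimal sketch follows;
    every number and transaction name in it is hypothetical - substitute the figures from
    your own non-functional requirements document:

    ```python
    # Sketch of a workload model built from hypothetical business inputs.
    # All numbers below are illustrative, not recommendations.

    TOTAL_USERBASE = 50_000        # registered users (assumed)
    PEAK_CONCURRENCY_PCT = 0.04    # 4% online at peak (assumed)
    GROWTH_PER_YEAR = 0.20         # 20% annual growth (assumed)

    # Key transactions and the share of concurrent users executing each.
    TRANSACTION_MIX = {
        "search": 0.50,
        "view_item": 0.35,
        "checkout": 0.15,
    }

    def concurrent_users(years_out: int = 0) -> int:
        """Peak concurrent users after applying the growth rate."""
        base = TOTAL_USERBASE * PEAK_CONCURRENCY_PCT
        return round(base * (1 + GROWTH_PER_YEAR) ** years_out)

    def transaction_load(years_out: int = 0) -> dict:
        """Concurrent users per key transaction."""
        total = concurrent_users(years_out)
        return {name: round(total * share) for name, share in TRANSACTION_MIX.items()}
    ```

    With these assumptions, `concurrent_users(0)` gives 2,000 peak users today and
    `concurrent_users(2)` gives 2,880 two years out - the figure you should actually be
    sizing the test (and the hardware) for.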


  2. Build an automated script
    which simulates the key transactions. Include think time in this script. Very few users
    are going to power through your application/website without taking a few seconds
    to see what the app did in response to their input. Failure to adequately simulate think
    time can result in you subjecting your application to unrealistic load, which leads to
    unhappiness all around. That being said, the business may identify that 10% of the
    userbase are power users, and you may want to deliver your load with 90% normal users,
    with 'normal' think time, and 10% power users, with faster, more aggressive think
    times.
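
    A minimal sketch of that idea, independent of any particular load tool - the think-time
    ranges and the 0.5 s response time are assumptions for illustration:

    ```python
    import random

    # One virtual-user loop with think time, plus the 90/10 split between
    # 'normal' and 'power' users described above. All timings are assumed.

    THINK_TIME = {"normal": (5.0, 15.0), "power": (1.0, 3.0)}  # seconds (min, max)

    def simulate_user(profile: str, duration_s: float, resp_time_s: float = 0.5,
                      seed: int = 0) -> int:
        """Return how many transactions one user completes in duration_s."""
        rng = random.Random(seed)
        lo, hi = THINK_TIME[profile]
        clock, done = 0.0, 0
        while True:
            clock += resp_time_s           # server handles the request
            clock += rng.uniform(lo, hi)   # user reads the page (think time)
            if clock > duration_s:
                return done
            done += 1

    def build_population(total_users: int) -> list:
        """90% normal users, 10% power users."""
        power = total_users // 10
        return ["power"] * power + ["normal"] * (total_users - power)
    ```

    The point of the think time is visible immediately: a power user completes several
    times more transactions per minute than a normal user, so leaving think time out
    entirely makes every virtual user a power user and overstates your load.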


  3. Add your virtual users over a
    time period (ramp-up time) - don't go from 0-500 in 1 second, unless you will actually
    have that kind of load (sale starts at 9:00 AM!). It's good to understand how your
    application will behave under load spikes, but some apps may fail in these scenarios,
    which is only a problem if you're expecting that kind of load. Otherwise, you may find
    yourself spending a lot more money than what's required to support a load that may never
    come.
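
    A linear ramp-up can be sketched as a simple schedule of per-user start times - the
    500-user/5-minute figures here are just examples:

    ```python
    # Sketch: linear ramp-up. Go from 0 to target_users over ramp_s seconds
    # instead of dropping the full load in one second.

    def rampup_schedule(target_users: int, ramp_s: float) -> list:
        """Start time (seconds from test start) for each virtual user,
        spaced evenly across the ramp-up window."""
        if target_users <= 1:
            return [0.0] * target_users
        step = ramp_s / (target_users - 1)
        return [i * step for i in range(target_users)]

    def active_users(schedule: list, t: float) -> int:
        """How many virtual users have started by time t."""
        return sum(1 for start in schedule if start <= t)
    ```

    For 500 users over a 300-second ramp, roughly half the population is active at the
    midpoint - which is exactly the gradual build-up most real traffic exhibits, barring
    the 9:00 AM sale scenario above.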


  4. Factor in latency and network
    speed. For a stress test, it's great to have a gigabit Ethernet connection with less
    than 1 ms latency to your application, which you can use to push your application to
    determine when it will fail. In reality, though, your users aren't usually that close to
    your application - they're coming over all different types of network
    conditions.
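
    The effect is easy to underestimate. As a rough sketch (the RTT figures and the
    three-round-trip assumption are illustrative, not measurements):

    ```python
    # Sketch: the same server-side response time looks very different once
    # client network conditions are added. RTT values are rough assumptions.

    RTT_MS = {"lan": 1, "broadband": 30, "mobile": 200, "satellite": 600}

    def user_response_ms(server_ms: float, profile: str, round_trips: int = 3) -> float:
        """Approximate end-user response time: server time plus the network
        round trips needed for the request and its follow-on fetches."""
        return server_ms + RTT_MS[profile] * round_trips
    ```

    A 200 ms server response that feels instant over the lab's gigabit link becomes
    two full seconds for the satellite user - a difference a stress test run entirely
    from the server room will never show you.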


  5. Endurance testing - at
    least 24 hours is recommended, more if you can afford it. You want to capture what
    happens to your application when periodic batch processes run, like backups, antivirus
    definition updates, or even IIS app pool recycles (every 29 hours by
    default).


  6. Understand the difference
    between performance testing and load testing. Load tests generally show the
    server's perspective. This isn't entirely true - many tools will show you the time
    a transaction takes in terms of TTLB (time to last byte) - but most tools today don't
    reflect client-side rendering times, which are material in JS-heavy applications, or
    ones that use XSLT, for example.


  7. Don't solely rely upon your
    automated test numbers - at least not starting on day one. Periodically manually
    validate the numbers you get back. Over time you can let this subside as you become more
    confident in your
    simulations.


  8. Performance counters -
    every application will vary, but you won't go wrong starting with the four basic food
    groups - CPU, memory, disk I/O, network I/O. A list of my preferred counters is at
    http://www.oneredlight.com/perf.config.txt. You can set your application up to log these
    counters to a 300 MB circular file with the following command line:

    logman create counter PERF -f bincirc -max 300 -si 2 --v -o "c:\perflogs\perf" -cf "perf.config"

    I've only tried these on Windows 2008/IIS 7/SQL 2008, so your mileage may vary.
    I would also recommend reading
    http://msdn.microsoft.com/en-us/library/ms998530.aspx, if your application is on the MS
    stack.






