What"s the best way to measure performance for a web application?

What"s the best way to measure performance for a web application?

on 15.04.2008 03:47:45 by DLN

I'm hoping that this is the proper place to ask a question regarding
performance/benchmark testing for a .NET application running on W2K3/IIS
6. If this isn't the correct forum, I apologize ahead of time.

I'm looking for the best approach to accurately measuring performance for a
web-based application. However, I'm at a complete loss as to what to measure
and what measurements constitute a healthy, well-performing application.

I've done a bit of digging around and have downloaded the WCAT testing tool
from the IIS Resource Kit. I decided that I could use a stock install of
WSS 3.0 to get a baseline, against which I could measure our application's
performance. Even here I might be taking the wrong approach, but the WSS
site has a good mix of web/DB transactions, which is similar to what our
in-house app does. Since SharePoint sees regular use in industry with very
few complaints, I'm assuming that it constitutes a healthy application. Any
performance data I collect for WSS could then be used to determine
whether our application performs well or not. All this extra work is
because I really don't know what kind of performance metrics I can expect
from a healthy application.

It was at about this point that I realized I don't know how to properly
benchmark my company's web application (or anybody else's, for that matter).
When I run the WCAT tests, I don't know how many users and computers I
should be modeling. Obviously I need to simulate the number of users I expect
to hit the site in a day, but is it as simple as taking that number and
dividing it by the number of seconds in a day (hours × minutes × seconds) to
obtain how many requests should be serviced on a per-second basis?
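
To make the arithmetic concrete, here is the back-of-the-envelope calculation
I've been doing, in Python. All of the traffic numbers and the busy-window
assumption are invented placeholders, not real figures from our site:

# Back-of-the-envelope sketch: average vs. peak request rate.
# All numbers here are made-up placeholders -- substitute real traffic data.

SECONDS_PER_DAY = 24 * 60 * 60   # hours * minutes * seconds = 86,400

daily_users = 50_000             # assumed visits per day
requests_per_visit = 20          # assumed pages/resources fetched per visit

daily_requests = daily_users * requests_per_visit
average_rps = daily_requests / SECONDS_PER_DAY

# Real traffic is bursty: if, say, 80% of the day's traffic arrives within
# an 8-hour business window, the rate the server must sustain is much higher
# than the 24-hour average.
busy_window_seconds = 8 * 60 * 60
peak_rps = (daily_requests * 0.80) / busy_window_seconds

print(f"24-hour average:     {average_rps:.1f} req/s")   # ~11.6 req/s
print(f"Busy-window estimate: {peak_rps:.1f} req/s")     # ~27.8 req/s

So even with made-up numbers, dividing by the whole day gives a figure that is
clearly lower than what the server would actually have to handle during
business hours -- which is part of why I'm unsure how to size the test.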

The next problem I have is determining which performance counters I need to
be looking at on the server running IIS. I've been looking at the Web
Service Files/Sec and Requests/Sec counters along with a couple of the
Network Interface and Processor counters. Is this a good place to start,
completely the wrong place, or should I be adding additional counters? For
that matter, are there other locations I should be looking to gather
performance metrics?

I guess my last question would be regarding the WCAT test tool itself.
Is this a good tool to use to generate benchmark data, or is there something
better (and hopefully free) I can use? I've been limiting my focus to
Windows-based tools because I have several Windows servers readily
available, but if there's a good Linux-based tool, I certainly don't mind
switching.

I could really use some expert input here. Although it would be nice if our
company had a dedicated QA group to do this sort of testing properly,
they've instead handed this to someone who is only guessing as to the best
way to proceed (a.k.a. me). I would greatly appreciate any help anybody
could offer.

Thanks,

DLN

Re: What"s the best way to measure performance for a web application?

on 15.04.2008 05:17:29 by David Wang

You ask some very thoughtful questions.

You have rightly realized that you first need to establish a baseline with a
proper workload traffic pattern. However, you don't want to do this against
WSS to compare numbers, because that's comparing apples and oranges. Unless
your web application is functionally comparable to WSS, reusing the same
traffic pattern does not make sense -- you may be hitting different "hot
set", "cold set", and "steady state" conditions that make the results
incomparable.

Furthermore, for performance testing you need to use as many users and
computers as necessary to max out the measured metric for your application.
If you do not max out the server, you are only measuring the load your test
clients happen to generate, not what the application can actually sustain, so
the results say little about its limits.
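
As a rough illustration of what "max out" means in practice -- not a
substitute for WCAT; the URL, duration, and threshold below are just
placeholders -- you keep adding clients until the measured metric stops
improving:

# Minimal sketch of "add clients until the metric stops improving".
import time
import urllib.request
from concurrent.futures import ThreadPoolExecutor

URL = "http://testserver/app/page.aspx"   # hypothetical target
DURATION = 30                             # seconds per load level

def worker(deadline):
    """Issue requests back-to-back until the deadline; return the count."""
    done = 0
    while time.time() < deadline:
        try:
            urllib.request.urlopen(URL, timeout=10).read()
            done += 1
        except OSError:
            pass  # count only successful requests
    return done

def measure(clients):
    deadline = time.time() + DURATION
    with ThreadPoolExecutor(max_workers=clients) as pool:
        totals = list(pool.map(worker, [deadline] * clients))
    return sum(totals) / DURATION  # requests per second at this load level

previous = 0.0
for clients in (1, 2, 4, 8, 16, 32, 64):
    rps = measure(clients)
    print(f"{clients:3d} clients -> {rps:7.1f} req/s")
    if rps < previous * 1.05:   # <5% improvement: throughput has plateaued
        print("Throughput no longer scales; the server is maxed out here.")
        break
    previous = rps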

In other words, you need to determine the answers to all your questions
yourself, because everything depends on your application. Benchmarking is all
about determining "with hardware configuration H and software configuration
S, if I send application A workload W, I get result R for metric M". The set
of metrics differs depending on what you are trying to prove, and you get to
define all of the conditions.
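
A trivial way to keep that straight -- just a sketch, and every field value
below is a made-up example -- is to record each run together with all of its
conditions, and only compare runs that differ in the one factor you are
deliberately varying:

# Sketch: record every run as "with hardware H and software S, application A
# under workload W gives result R for metric M". Field values are examples.
from dataclasses import dataclass

@dataclass(frozen=True)
class BenchmarkRun:
    hardware: str      # H -- e.g. "2x Xeon, 4 GB RAM, GbE"
    software: str      # S -- e.g. "W2K3 SP2, IIS 6, .NET 2.0"
    application: str   # A -- build/version under test
    workload: str      # W -- named request mix and client count
    metric: str        # M -- what was measured
    result: float      # R -- the number you got

run = BenchmarkRun(
    hardware="2x Xeon, 4 GB RAM, GbE",
    software="W2K3 SP2, IIS 6, .NET 2.0",
    application="in-house app build 1.3.0",
    workload="90% read pages / 10% DB pages, 32 clients",
    metric="requests/sec",
    result=312.5,   # example value only
)

def comparable(a: BenchmarkRun, b: BenchmarkRun) -> bool:
    # Results are only comparable when everything except the application
    # build under test is held constant.
    return ((a.hardware, a.software, a.workload, a.metric)
            == (b.hardware, b.software, b.workload, b.metric))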

For example, some of the things people tend to look at include:
1. Request Throughput -- how long does it take to execute an average
request? Of course, you know that not all requests are equal, so it heavily
depends on how you set up the request mix -- which is basically whatever you
think reflects reality for your web application. Often during this testing
you find things like "if more than 10% of users are requesting the slow
database pages, throughput goes way down even though CPU is still
underutilized" (indicating contention in the application).

At this point you can either get the issue fixed, or scope your request mix
to fewer than 10% of users hitting database pages and then try to rationalize
it. Of course, users ultimately decide whether your rationalization makes
sense, but you've covered your bases by doing the investigation and pointing
out the issue. You may have to defend the modified request mix, but that's
the usual "blame game" when you find problems that don't get fixed.

2. Max Concurrent Request Processing -- how many CONCURRENT requests can be
executing? This is often linked to #1 in applications that perform poorly
when scaling. Since it is always possible to use enough clients to swamp a
server, you have to fiddle with the number of clients (each client is
considered one independent connection) to make sense of this; a
back-of-the-envelope sketch relating request mix, latency, and concurrency
follows this list.

3. CPU/RAM/Network Utilization -- how much does the average workload consume
in resources? Frequently people want this number to decide how many servers
to buy and provision to run an application. However, as you have already
realized, the value of a metric depends on the hardware and configuration, so
this number is really hard to reuse.
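
To make #1 and #2 concrete, here is a back-of-the-envelope sketch using
Little's Law (concurrency is roughly throughput times average latency). The
percentages and latencies are invented placeholders, not targets:

# How a request mix and Little's Law (concurrency ~= throughput * latency)
# tie throughput and concurrency together. All numbers are placeholders.

mix = {
    "static/cached page": {"share": 0.90, "latency_s": 0.050},
    "slow database page": {"share": 0.10, "latency_s": 0.800},
}

# Weighted average time to serve one request from this mix.
avg_latency = sum(p["share"] * p["latency_s"] for p in mix.values())
print(f"Average latency: {avg_latency * 1000:.0f} ms")   # 125 ms

# At a target of 200 req/s, Little's Law gives the number of requests in
# flight at once -- i.e. how much concurrency the server must sustain.
target_rps = 200
concurrent = target_rps * avg_latency
print(f"~{concurrent:.0f} requests in flight at {target_rps} req/s")  # ~25

# Shift the mix to 20% database pages and the same throughput now needs
# about 60% more in-flight requests -- one reason a "worse" mix collapses
# first even though the hardware hasn't changed.
mix["static/cached page"]["share"] = 0.80
mix["slow database page"]["share"] = 0.20
avg_latency2 = sum(p["share"] * p["latency_s"] for p in mix.values())
print(f"New average latency: {avg_latency2 * 1000:.0f} ms, "
      f"~{target_rps * avg_latency2:.0f} in flight")     # 200 ms, ~40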

As for perf counters -- all of the above are part of the OS. The relevant
counters completely depend on your application's request mix as well as the
metric you are interested in.
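
That said, a common starting point is to log a handful of counters for the
duration of each test run and line them up against the load tool's output for
the same window. Here is a sketch using the built-in typeperf tool; the
counter paths are the usual IIS 6 / .NET ones, but verify them in Performance
Monitor on your own server, since object and counter names vary by version:

# Sketch: capture a few counters during a load test with Windows "typeperf".
import subprocess

counters = [
    r"\Processor(_Total)\% Processor Time",
    r"\Memory\Available MBytes",
    r"\Network Interface(*)\Bytes Total/sec",
    r"\Web Service(_Total)\Total Method Requests/sec",
    r"\Web Service(_Total)\Current Connections",
    r"\ASP.NET\Requests Queued",
]

# One sample per second for 5 minutes, written to a CSV you can open next to
# the WCAT results for the same time window.
subprocess.run(
    ["typeperf", *counters,
     "-si", "1", "-sc", "300", "-f", "CSV", "-o", "counters.csv"],
    check=True,
)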

In short, if you are looking for a tool that you just install, run, and get
benchmark results from, then you are going to be disappointed. The tool just
allows you to make the measurement. You still have to determine the
appropriate workload W and metric M for your application A, and that is
usually pretty unique -- and caveat the measurement as the result of hardware
H and software configuration S.


//David
http://w3-4u.blogspot.com
http://blogs.msdn.com/David.Wang
//