sc-status on live server
on 29.06.2007 10:52:27 by JimLad
Hi,
I've just started using Log Parser on our web app. It has never really
been looked at from a performance point of view before.
For my analysis of slow-running pages I have used the filter sc-status
= 200. Is this valid?
I have also done a count of the different statuses which are occurring.
Here they are:
sc-status count
--------- ------------
401 95798
200 86081
302 352
304 1484
404 702
500 5
403 48
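(For reference, the same tally can be produced without Log Parser. This is just a sketch that reads the column position from the log's own #Fields directive instead of hard-coding it; the sample log lines are made up:)

```python
import io
from collections import Counter

def status_counts(log_lines):
    """Tally sc-status values from an IIS W3C extended log stream."""
    counts = Counter()
    idx = None
    for line in log_lines:
        line = line.rstrip("\n")
        if line.startswith("#Fields:"):
            # Field order varies per server config, so read it from the log.
            idx = line.split()[1:].index("sc-status")
        elif line.startswith("#") or not line:
            continue  # other directives and blank lines
        elif idx is not None:
            counts[line.split()[idx]] += 1
    return counts

sample = io.StringIO(
    "#Fields: date time cs-uri-stem sc-status time-taken\n"
    "2007-06-29 10:52:27 /app/page.aspx 401 15\n"
    "2007-06-29 10:52:27 /app/page.aspx 200 1234\n"
    "2007-06-29 10:52:28 /app/other.aspx 200 980\n"
)
print(status_counts(sample))  # Counter({'200': 2, '401': 1})
```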
Is there anything there that I should worry about? We're using
Kerberos, if that makes any difference.
Cheers,
James
Re: sc-status on live server
on 29.06.2007 12:55:17 by David Wang
On Jun 29, 1:52 am, JimLad wrote:
> [quoted text snipped]
It all depends on how worrisome you are.
If you want to analyze status codes, look at them per URL. There is no
need to mush authenticated and anonymous URLs into the same count,
because their access patterns will be different.
I'm not certain why slow pages would have any correlation with sc-
status=200. I'd think that looking at the count of pages whose time-
taken is above a certain threshold is more interesting -- it tells you
which pages are taking a long time and how frequent they are relative
to one another.
For example, the following Log Parser query finds all URLs in
%LOGFILENAME% that took more than 10 seconds to complete and sorts
them in descending frequency order, along with each URL's relative
percentage of such long-running requests. It lets you say "out of all
requests that took more than 10 seconds to complete, URL-A accounts
for Y% of them and took X seconds on average."
SELECT
    COUNT(*) AS Hits,
    MUL(PROPCOUNT(*), 100) AS Percentage,
    AVG(time-taken) AS AvgTimeTaken,
    cs-uri-stem
FROM %LOGFILENAME%
WHERE time-taken >= 10000
GROUP BY cs-uri-stem
ORDER BY Hits DESC
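If Log Parser isn't to hand, the same aggregation is easy to script. This is only a sketch that assumes W3C extended logging with the time-taken and cs-uri-stem fields enabled (time-taken is in milliseconds):

```python
import io
from collections import defaultdict

def slow_url_report(log_lines, threshold_ms=10000):
    """Per-URL hits, share of all slow requests, and average time-taken
    for requests at or above threshold_ms -- same shape as the query."""
    stats = defaultdict(lambda: [0, 0])  # cs-uri-stem -> [hits, total ms]
    fields = None
    for line in log_lines:
        line = line.rstrip("\n")
        if line.startswith("#Fields:"):
            fields = line.split()[1:]
        elif line.startswith("#") or not line or fields is None:
            continue
        else:
            row = dict(zip(fields, line.split()))
            taken = int(row["time-taken"])
            if taken >= threshold_ms:
                stats[row["cs-uri-stem"]][0] += 1
                stats[row["cs-uri-stem"]][1] += taken
    total = sum(hits for hits, _ in stats.values()) or 1
    report = [(url, hits, 100.0 * hits / total, ms / hits)
              for url, (hits, ms) in stats.items()]
    report.sort(key=lambda r: r[1], reverse=True)  # ORDER BY Hits DESC
    return report
```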
Data analysis with LogParser is really open-ended, and the usefulness
of any data depends completely on your creativity in identifying
useful correlations and your ability to crunch the numbers correctly.
LogParser just allows you to actually do it.
//David
http://w3-4u.blogspot.com
http://blogs.msdn.com/David.Wang
//
Re: sc-status on live server
on 29.06.2007 13:28:53 by JimLad
On Jun 29, 11:55 am, David Wang wrote:
> [quoted text snipped]
Hi David. Thanks. I do, of course, filter on a time-taken threshold as
well, but I use sc-status so that I'm only looking at successful
completions. I was really asking whether that was sensible or not. My
time-taken averages seemed a bit off when I didn't do this.
As regards the 401 errors, anonymous access is disabled (it's an
intranet app using Windows users). We use impersonation and Kerberos
delegation through to the backend DB. Should I be concerned about all
these 401 messages?
Cheers,
James
Re: sc-status on live server
on 29.06.2007 23:36:54 by David Wang
On Jun 29, 4:28 am, JimLad wrote:
> [quoted text snipped]
> Should I be concerned with all these 401 messages?
It all depends on how worrisome you are.
When non-anonymous authentication is required, 401s will naturally
happen as clients attempt authentication -- with Kerberos or NTLM, the
first request on a new connection is answered with a 401 challenge
before the authenticated retry succeeds, so a large 401 count is
expected. 401 will also happen when people attempt unauthorized
access. I suspect you are not worried about the former but are worried
about the latter; realistically, though, given only a count of 401s
you cannot tell the intent. Maybe you can try to infer illegal
activity through other means (such as grouping by c-ip, though that
can easily be masked), but the effort depends on how worrisome you
are.
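Grouping by client IP might look like this as a script -- a sketch assuming the c-ip and sc-status fields are logged, and remember the counts prove nothing about intent:

```python
import io
from collections import Counter

def unauthorized_by_ip(log_lines):
    """Count 401 responses per client IP (c-ip) -- a rough way to spot
    clients generating unusual volumes of authentication failures."""
    counts = Counter()
    fields = None
    for line in log_lines:
        line = line.rstrip("\n")
        if line.startswith("#Fields:"):
            fields = line.split()[1:]
        elif line.startswith("#") or not line or fields is None:
            continue
        else:
            row = dict(zip(fields, line.split()))
            if row.get("sc-status") == "401":
                counts[row.get("c-ip", "?")] += 1
    return counts.most_common()
```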
As for per-URL statistics, it all depends on what you are trying to
analyze. If you wanted to look at the overall bandwidth consumed by a
URL, you would not filter by sc-status; but if you wanted to look at
the bandwidth consumed by successfully executing the URL to do work,
you would filter by sc-status, because you probably treat successful
execution differently than unsuccessful execution.
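That distinction could be sketched like so (assuming sc-bytes is enabled in the log; this is an illustrative helper, not Log Parser itself):

```python
import io
from collections import defaultdict

def bytes_by_url(log_lines, status=None):
    """Sum sc-bytes per URL. status=None gives overall bandwidth per
    URL; status='200' counts only successful executions."""
    totals = defaultdict(int)
    fields = None
    for line in log_lines:
        line = line.rstrip("\n")
        if line.startswith("#Fields:"):
            fields = line.split()[1:]
        elif line.startswith("#") or not line or fields is None:
            continue
        else:
            row = dict(zip(fields, line.split()))
            if status is None or row.get("sc-status") == status:
                totals[row["cs-uri-stem"]] += int(row["sc-bytes"])
    return dict(totals)
```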
//David
http://w3-4u.blogspot.com
http://blogs.msdn.com/David.Wang
//