Re: Putting a throttle on Apache (CSWS), or all of TCP
am 22.11.2007 02:22:02 von John Wallace
"Keith Lewis" wrote in message
news:d5e40952-c9f6-44e1-9037-158f90fbefc8@b15g2000hsa.googlegroups.com...
> I have one of my VMS boxes set up to record and save a certain radio
> show. As a service to my fellow fans out of the area, I make it
> available for download on the web, over my relatively slow DSL
> connection.
Is there a particular reason you do it this way rather than uploading it to
some web/ftp space hosted somewhere where bandwidth is less of an issue?
Cost may be one reason. Time is money. You can fix this one of two ways:
spend time, or spend money (option 3, spend both, also applies).
>
> The problem is I got a new ethernet switch.
Understood. Does it replace a previous one, or is the network config now
different? If the config is unchanged, can you put the old switch back and
see what behaviour you observe?
> which seems to have a big
> buffer.
That's a bit of a surprise. I'd want some hard evidence, if possible, that
this really is the cause; I'm not sure how VMS users gather that (a tcpdump
equivalent? Ethereal/Wireshark?)
>It is killing the latency from my other machines.
Not 100% sure which machines are seeing excessive latency, and/or to where?
Intra-LAN latencies from anything on the LAN to anything on the LAN are
high? VMS to outside world latencies are high while LAN latencies are OK?
Does the latency depend on whether anybody's actively copying the
files/streams from you? These are different situations; the fix may depend
on which one it is.
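The triage questions above can be sketched as a little script: ping a LAN host and a WAN host while a download is in progress and compare the averages against a quiet-line baseline. The hostnames, thresholds, and the "avg RTT" parsing (Unix-style ping summary line) are my assumptions, not anything from this thread.

```python
import re
import subprocess

def avg_rtt_ms(host: str, count: int = 5) -> float:
    """Run ping and parse the average RTT (assumes a Unix-style
    'min/avg/max' summary line in the output)."""
    out = subprocess.run(["ping", "-c", str(count), host],
                         capture_output=True, text=True).stdout
    m = re.search(r"= [\d.]+/([\d.]+)/", out)
    if not m:
        raise RuntimeError(f"could not parse ping output for {host}")
    return float(m.group(1))

def triage(lan_ms: float, wan_ms: float, baseline_wan_ms: float) -> str:
    """Crude classification of where the latency problem lives.
    Thresholds are arbitrary illustrative guesses."""
    if lan_ms > 10:
        return "intra-LAN problem (switch/NIC level)"
    if wan_ms > 3 * baseline_wan_ms:
        return "upstream link saturated (queueing at the DSL router)"
    return "latency looks normal"
```

For example, `triage(avg_rtt_ms("192.168.1.10"), avg_rtt_ms("example.com"), 40.0)` run once during a download and once with Apache stopped would separate the "intra-LAN" case from the "saturated uplink" case.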
>
> The ideal solution to this (short of IPv6 packet prioritization) would
> be to limit the Apache web server to a certain fixed bandwidth lower
> than my total available on DSL, so then the buffer would not fill up.
>
IPv6. It's the next big thing. ISTR it was the next big thing in the 1980s
(shortly after OSI networks solved the same classes of problem but went
nowhere despite DEC's efforts) and IPv6 is still the next big thing :) Do
any ISPs in the US actually support IPv6? The UK didn't have many last time
I looked (a year or three ago).
Anyway, IPv6 or not, upstream bandwidth management is a popular approach to
managing a DSL connection so that upstream traffic (from you to the outside
world) does not slow down (or seriously increase latency for) your
downstream traffic, but this is more usually related to "ack starvation",
which wouldn't necessarily be related to a new switch with big buffers.
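As a back-of-envelope illustration of why a saturated uplink with a big buffer murders latency: every interactive packet has to wait behind whatever bulk data is already queued. The figures below (uplink speed, buffer size) are assumptions for illustration, not measurements from this thread.

```python
UPSTREAM_BPS = 256_000     # assumed 256 kbit/s DSL uplink
QUEUED_BYTES = 64 * 1024   # assumed 64 KB of bulk data buffered ahead

def queueing_delay_s(queued_bytes: int, uplink_bps: int) -> float:
    """Seconds an arriving packet waits for the queue ahead to drain."""
    return queued_bytes * 8 / uplink_bps

print(f"{queueing_delay_s(QUEUED_BYTES, UPSTREAM_BPS):.1f} s")  # prints 2.0 s
```

Two seconds of queueing delay is more than enough to make gaming and VNC unusable, which matches the symptoms described.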
> A less desireable solution would be to put the bandwidth limit on
> TCPIP Services for OpenVMS as a whole.
>
Indeed. Sledgehammer to crack nut, probably even if this has to be a "pure
VMS" solution.
What other resources are available to you on the LAN? What DSL router are
you using? Does it have a built-in switch?
Y'know, given the picture as described, the first thing I'd do would be to
*closely* observe any LEDs on NICs and switch ports, and maybe keep an eye
on any VMS "line up"/"line down" (or is it circuit up/circuit down) events,
to make sure that the traditional silliness associated with auto-negotiate
and incompatible physical layer implementations isn't occurring here. If it
is occurring, the typical symptom is a short burst of traffic, a slightly
longer pause while (re)negotiation takes place, a burst of traffic, a pause,
etc. This isn't *quite* the same as steady traffic with daft latency, but
may occasionally appear that way, so please accept my apologies if it's not
what you're seeing. If it is occurring, you can try to fix it by forcing the
VMS NICs to a fixed speed of your choice. I'm not 100% sure that's a
guaranteed fix though, especially as your unmanaged switch won't have a
fixed-speed option.
You also might want to temporarily connect your VMS web box direct to your
DSL router, *without* going via the new Trendnet switch, if this is an
option. Similarly, if the DSL router has a built-in switch (as many do), you
might want to try moving stuff between the DSL router's switch and the Trendnet
switch, see if anything improves. I've seen cases where cascaded switches
don't behave as you'd hope (a D-Link and something else), and cases where
they do (a pair of Netgear FS10x), and this reconfiguration may also help
eliminate any issues with incompatible autonegotiate setups.
Assuming it's not autonegotiate silliness, if there was scope for a
Linux-based router (either an actual SoHo router reflashed, or a
PC/oldlaptop, or similar) I'd probably look at Linux and some of the "DSL
bandwidth management howtos" scattered around the Internerd (based on
IPtables and QoS and stuff). Or at least seek advice from someone who
understands these things properly (I don't). Another option could be to look
at DSL routers supplied with "Turbo TCP" (Westell's name) or something
equivalent, which is allegedly a ready-made solution to the problem of "ack
starvation".
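What those Linux bandwidth-management howtos set up (via tc/HTB and iptables) boils down to shaping: admit upstream bytes only as fast as a rate slightly below the DSL uplink, so the router's buffer never fills. A minimal token-bucket sketch of that idea, with a simulated clock and assumed figures rather than anything measured here:

```python
class TokenBucket:
    """Toy token bucket: send is allowed only while tokens (bytes) remain;
    tokens refill at the configured rate."""
    def __init__(self, rate_bps: float, burst_bytes: float):
        self.rate = rate_bps / 8      # refill rate in bytes/second
        self.burst = burst_bytes      # maximum burst allowance
        self.tokens = burst_bytes
        self.last = 0.0

    def allow(self, nbytes: int, now: float) -> bool:
        """True if nbytes may be sent at time `now` without exceeding the rate."""
        self.tokens = min(self.burst, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= nbytes:
            self.tokens -= nbytes
            return True
        return False

# Shape to an assumed 200 kbit/s with a 5 KB burst allowance:
tb = TokenBucket(rate_bps=200_000, burst_bytes=5_000)
print(tb.allow(1500, now=0.0))   # True: one full frame fits in the burst
```

A real deployment would do this in the kernel with tc rather than in user space, but the pacing logic is the same.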
If it has to be a VMS-based solution, JF has some interesting thoughts
around using 2 separate NICs, and reducing what the rest of the world calls
RWIN. Another possible option might be to force a low MTU for traffic
going via your DSL router, while retaining the default MTU for on-LAN traffic,
which *might* give you the necessary control. Not sure of the exact TCP
services incantation for that, but if TCP services can't do it, a decent DSL
router (with configurable MTU) should be able to do it instead, so long as
Path MTU Discovery is working correctly between you and the folks you
connect to. In fact if your router can easily change its MTU, this might be
something to try earlier rather than later.
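The reason a smaller MTU toward the DSL router can help: an interactive packet can't preempt a frame that's already being serialised onto the slow uplink, so the worst-case wait scales with frame size. The uplink speed below is an assumption for illustration.

```python
UPLINK_BPS = 256_000   # assumed 256 kbit/s DSL uplink

def serialisation_ms(frame_bytes: int, bps: int = UPLINK_BPS) -> float:
    """Milliseconds to clock one frame of this size onto the uplink."""
    return frame_bytes * 8 * 1000 / bps

print(f"{serialisation_ms(1500):.0f} ms")  # prints 47 ms: full-size frame
print(f"{serialisation_ms(576):.0f} ms")   # prints 18 ms: 576-byte MTU
```

So a lower MTU trims per-frame head-of-line blocking, though it does nothing about a deep queue of many frames; that still needs rate limiting or QoS.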
Best of luck
John Wallace
Re: Putting a throttle on Apache (CSWS), or all of TCP
am 23.11.2007 08:28:45 von John Wallace
"Keith Lewis" wrote in message
news:61688852-fcec-4438-93ed-2856fad6d308@i12g2000prf.googlegroups.com...
> On Nov 21, 8:22 pm, "John Wallace"
> wrote:
> > [full quote of previous message snipped]
>
> OK, more details on the network change that (I assume) led to the
> problem.
>
> I built myself a new Linux box. The switch in my Linksys router was
> already full, so I bought an additional 8-port switch.
>
> Since the new Linux system has a second ethernet interface, I seized
> the opportunity to set it up as a network monitor. I moved all my
> computers to the new switch and connected the switch to the router
> through an old 10Mb half-duplex hub. Then I connected the second
> interface of the new Linux box to the dumb hub, so it gets mirrored
> copies of every external packet.
>
> The latency problem only happens when somebody is doing a big download
> from my web site, and it only affects external communications. I
> notice it mostly from my Windows PC, which I use for gaming and VNC
> over VPN. If I fire up etherape on the Linux system, I see a large,
> steady, volume of outgoing web traffic. If I shut down Apache on VMS,
> or if the download finishes, the problem clears up.
>
> Failed auto-negotiation is a potential problem here. Probably not on
> my alphas but on the dumb hub. If I'm reading the LEDs on the router
> right, it has correctly selected 10-half for its connection to that
> hub. The Trendnet switch on the other side doesn't give as much
> info.
>
> The idea of testing what happens when the VMS server is plugged
> directly into the router is a good one. I hesitate to do that because
> then I'd be sending my LAVC traffic through the 10Mb hub. I may need
> what JF suggested -- a second interface on the server. Plug one into
> the dumb hub for web-server traffic and leave the other on the switch
> for internal use.
>
> Hosting the files on a bigger site wouldn't help much. It would add a
> delay for the people I'm trying to help out, and it would still use
> lots of bandwidth while it was uploading to that site for a couple of
> hours a day. At least I could control when it ran.
>
> Dirk, thanks for the hardware suggestion.
>
> Shimmyshack, I'll take a look at mod_bw.
Thanks for the detailed update.
The symptoms now do sound consistent with maxing out your upstream DSL
bandwidth while folks are downloading from you. The good news is that
you seem to have all the ingredients in place to fix it without major
investments of time or money, possibly without fussing about auto-negotiate
or whether switch->hub connections are behaving badly, and possibly without
needing a 2nd NIC for LAVC traffic too (the 2nd NIC is a fine idea in
principle, and in a production environment, but how much LAVC traffic is
really in this picture?)
From my understanding of the updated picture, I still have two little
puzzles:
1) Hosting a website on a DSL line isn't ideal when there's significant bulk
traffic going on. External hosting would mean that you only ever "upload"
(from your LAN to external host) one copy of the data being hosted. E.g. if
two (or more) people download the radio programme via DSL, you've saved on
your DSL traffic (in bytes) by using external hosting. If two (or more)
people download the radio programme *simultaneously* via DSL, you're saving
on your DSL *bandwidth* (in bits/second) by using external hosting. Your
users may also get faster downloads from an external hosting service than
they would from you. Obviously you've got to pay for external hosting, but
given your apparent need to throttle/deprioritise your upstream bulk traffic
to maintain decent service for your interactive apps, something is going to
have to suffer. As far as I can tell, the least suffering overall occurs if
you move your website off your LAN to an external host: you still control
the way the website is updated, and neither you nor your website's users are
quite so constrained by your limited DSL bandwidth.
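The external-hosting trade-off in bytes is easy to put numbers on: with N people downloading directly from the VMS box, the file crosses the DSL uplink N times; hosted externally, it crosses once. The file size is an assumed figure for illustration.

```python
FILE_MB = 50   # assumed size of one recorded show

def dsl_upload_mb(n_downloaders: int, hosted_externally: bool) -> int:
    """Megabytes that cross the DSL uplink for n downloads of one show."""
    return FILE_MB if hosted_externally else FILE_MB * n_downloaders

print(dsl_upload_mb(4, False))  # prints 200: four direct downloads
print(dsl_upload_mb(4, True))   # prints 50: one upload, at a time you choose
```

The simultaneous-download case is the same arithmetic applied to bits/second instead of bytes, which is where the interactive latency pain comes from.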
2) I'm not sure why you wouldn't plan to retire the old faithful 10Mbit hub,
and use the dual-networked Linux box as a firewall/router/QoS box, with one
NIC connected to the new switch (with all the computers on it) and one NIC
as the only device connected directly to your DSL box. The Linux box would
seem to have all the right bits (in principle) to monitor *and control*
traffic between your LAN and the outside world, and all it would need is
time+expertise to set it up. You could do this whether or not you choose to
go for external hosting of your website. It does make your Linux box a
single point of failure in your outside-world connection, but if your
website were hosted externally, your users wouldn't be affected if your
Linux box was down; the downtime would only affect you.
You have the full picture, I don't, but that's the way it currently looks
from here.
Regards
John