Large-scale spidering
on 14.04.2006 04:22:47 by marvin
Greets,
I've written an industrial-strength search engine library for Perl
(KinoSearch), and now I have clients who want me to work on a large-
scale spidering app for them. Sort of like Nutch for Perl
(<http://lucene.apache.org/nutch/>). Putch. :)
What efforts have already been undertaken in this area? A survey of
existing CPAN releases that I should study would be great. I've
written a small-scale spider using LWP::RobotUA. I've skimmed the
WWW::Mechanize docs, but don't yet grasp its full capabilities.
What else?
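For reference, here's roughly the shape of that small-scale
LWP::RobotUA spider (a minimal sketch: the queue handling, link
extraction, and indexing hook are simplified, and the bot name,
contact address, and start URL are placeholders):

    #!/usr/bin/perl
    use strict;
    use warnings;
    use LWP::RobotUA;
    use HTML::LinkExtor;

    # LWP::RobotUA fetches and honors each host's robots.txt
    # automatically, and throttles requests to any one host.
    my $ua = LWP::RobotUA->new( 'ExampleBot/0.1', 'bot@example.com' );
    $ua->delay( 10 / 60 );    # delay() takes minutes: 10 seconds

    my @queue = ('http://www.example.com/');
    my %seen;

    while ( my $url = shift @queue ) {
        next if $seen{$url}++;
        my $response = $ua->get($url);
        next unless $response->is_success
            and $response->content_type eq 'text/html';

        # ... hand $response->content to the indexer here ...

        # Extract links; passing a base URL makes them absolute.
        my $extor = HTML::LinkExtor->new( undef, $url );
        $extor->parse( $response->content );
        for my $link ( $extor->links ) {
            my ( $tag, %attr ) = @$link;
            push @queue, $attr{href} if $tag eq 'a' and $attr{href};
        }
    }

(WWW::Mechanize would collapse the link extraction into a single
$mech->links call, though as far as I can tell it doesn't consult
robots.txt on its own.)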
Thanks,
Marvin Humphrey
Rectangular Research
http://www.rectangular.com/
Re: Large-scale spidering
on 15.04.2006 02:31:45 by merlyn
>>>>> "Marvin" == Marvin Humphrey writes:
Marvin> I've written an industrial-strength search engine library for Perl
Marvin> (KinoSearch), and now I have clients who want me to work on a large-
Marvin> scale spidering app for them. Sort of like Nutch for Perl
Marvin> (<http://lucene.apache.org/nutch/>). Putch. :)
The original Inktomi spider was written in Perl, and even available as public
open source for a while. "Back in the day."
Is this for a private network? Please tell me you're not trying to build Yet
Another Spider to visit my site! If so, please use the Google or Yahoo API to
leverage the fact that they've already done an excellent job of visiting a few
thousand URLs, including 40,000 images. You don't need to come here.
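Something like this is all it takes to pull results from Yahoo's REST
search service (a sketch, assuming the V1 webSearch endpoint and its
Result/Title/Url response fields, with a placeholder appid you'd
register for yourself):

    use strict;
    use warnings;
    use LWP::UserAgent;
    use URI;
    use XML::Simple;

    # Assumed endpoint for Yahoo's V1 web search service.
    my $uri = URI->new(
        'http://search.yahooapis.com/WebSearchService/V1/webSearch');
    $uri->query_form(
        appid   => 'YourAppId',    # placeholder: register your own
        query   => 'site:stonehenge.com perl',
        results => 10,
    );

    my $ua       = LWP::UserAgent->new;
    my $response = $ua->get($uri);
    die $response->status_line unless $response->is_success;

    # The response is XML; each <Result> carries Title, Url, etc.
    my $set = XMLin( $response->content, ForceArray => ['Result'] );
    for my $result ( @{ $set->{Result} } ) {
        print "$result->{Title}\n    $result->{Url}\n";
    }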
--
Randal L. Schwartz - Stonehenge Consulting Services, Inc. - +1 503 777 0095
Perl/Unix/security consulting, Technical writing, Comedy, etc. etc.
See PerlTraining.Stonehenge.com for onsite and open-enrollment Perl training!
Re: Large-scale spidering
on 18.04.2006 20:07:52 by marvin
Hello, Randal,
Ironically, OS X Mail flagged your post as spam, and I've only just
found it.
On Apr 14, 2006, at 5:31 PM, Randal L. Schwartz wrote:
> The original Inktomi spider was written in Perl, and even available
> as public open source for a while. "Back in the day."
I've been unable to hunt down that source code. Is it archived on
any public site?
> Is this for a private network? Please tell me you're not trying to
> build Yet Another Spider to visit my site!
This specific spider wouldn't visit stonehenge.com, as the subject
matter wouldn't be relevant. But I take it that you mean "my site"
in a collective sense, and that you're objecting philosophically to
the potential increase in average server load if the web spider
population explodes. Am I reading you right?
I'm afraid web spiders are going to multiply regardless
of what I do personally. At this time, I'm dedicated to KinoSearch
and I have zero interest in publishing and maintaining a "Putch", but
Nutch exists, it's getting better, and other competitors to it are
going to appear.
Some of these spiders are going to obey robots.txt; some won't.
Certainly any of the ones I write will.
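For what it's worth, even a spider that isn't built on LWP::RobotUA
(which honors robots.txt automatically) can bolt the check on with
WWW::RobotRules, which ships with libwww-perl. A minimal sketch, with
a placeholder agent name and URLs:

    use strict;
    use warnings;
    use LWP::Simple qw(get);
    use WWW::RobotRules;

    my $rules = WWW::RobotRules->new('ExampleBot/0.1');

    # Fetch and parse the site's robots.txt before crawling it.
    # If the fetch fails, an empty rule set permits everything.
    my $robots_url = 'http://www.example.com/robots.txt';
    $rules->parse( $robots_url, get($robots_url) || '' );

    for my $url ( 'http://www.example.com/',
                  'http://www.example.com/private/' ) {
        print $rules->allowed($url) ? "fetch: $url\n" : "skip:  $url\n";
    }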
> If so, please use the Google or Yahoo API to leverage the fact that
> they've already done an excellent job of visiting a few thousand
> URLs, including 40,000 images. You don't need to come here.
Another option is the Alexa database, from which you can grab data
for a per-GB fee.
http://websearch.alexa.com/docs/price_guide.html
However, the Alexa crawl data may not contain the pages you want, or
be updated frequently enough to meet your needs.
If you want your web search site to compete with Google/Yahoo, it's
doubtful that you would want to farm out the task of spidering. Of
course that's true for major Google competitors such as
search.msn.com and ask.com. More to the point, though, you may not
want to farm out spidering even if you're competing only in a narrow
niche.
http://wiki.apache.org/nutch/PublicServers
Marvin Humphrey
Rectangular Research
http://www.rectangular.com/