www/crawl
Small and efficient HTTP crawler
Branch: pkgsrc-2017Q4
Version: 0.4nb12
Package name: crawl-0.4nb12,
Maintainer: pkgsrc-users

The crawl utility starts a depth-first traversal of the web at the specified
URLs. It stores all JPEG images that match the configured constraints.
Crawl is fairly fast and allows for graceful termination. After terminating
crawl, it is possible to restart it at exactly the same spot where it was
terminated. Crawl keeps a persistent database that allows multiple crawls
without revisiting sites.
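The resume behavior described above follows from keeping the visited-URL set in a database rather than in memory. As an illustrative sketch only (crawl itself is written in C and this models the idea, not its actual implementation), a depth-first crawler backed by a persistent store can be resumed at any point without revisiting sites:

```python
import sqlite3

class VisitedStore:
    """Persistent set of visited URLs; survives process restarts."""

    def __init__(self, path=":memory:"):
        self.db = sqlite3.connect(path)
        self.db.execute("CREATE TABLE IF NOT EXISTS visited (url TEXT PRIMARY KEY)")

    def mark(self, url):
        self.db.execute("INSERT OR IGNORE INTO visited VALUES (?)", (url,))
        self.db.commit()

    def seen(self, url):
        cur = self.db.execute("SELECT 1 FROM visited WHERE url = ?", (url,))
        return cur.fetchone() is not None

def crawl_dfs(start, links_of, store, limit=100):
    """Depth-first traversal that skips URLs already in the persistent
    store, so a restarted crawl resumes where it left off."""
    stack = [start]
    order = []
    while stack and len(order) < limit:
        url = stack.pop()
        if store.seen(url):
            continue
        store.mark(url)
        order.append(url)
        # Push children reversed so they are visited left-to-right.
        stack.extend(reversed(links_of(url)))
    return order
```

With an on-disk database path instead of `:memory:`, terminating and rerunning the crawler picks up with an already-populated visited set, which is the property the description above relies on.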
The main features of crawl are:
* Saves encountered images or other media types
* Media selection based on regular expressions and size constraints
* Resume previous crawl after graceful termination
* Persistent database of visited URLs
* Very small and efficient code
* Asynchronous DNS lookups
* Supports robots.txt
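The second feature above, media selection by regular expression and size, amounts to a two-part predicate on each candidate file. A minimal sketch, assuming hypothetical parameter names (these are not crawl's actual option names):

```python
import re

def media_matches(url, size, pattern=r"\.jpe?g$", min_size=1024, max_size=None):
    """Keep a file only if its URL matches the configured regular
    expression and its size falls within the configured bounds.
    Illustrative only; parameter names are assumptions."""
    if not re.search(pattern, url, re.IGNORECASE):
        return False
    if size < min_size:
        return False
    if max_size is not None and size > max_size:
        return False
    return True
```

Filtering on a size floor is what lets a crawler of this kind skip thumbnails and icons while saving full-size images.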
Required to build: pkgtools/cwrappers
Master sites:
SHA1: b53be27b572ba6a88ab80243b177873aed0b314b
RMD160: c86898b66c661e6b841170114deba4d8f076651d
Filesize: 108.48 KB
Version history:
- (2018-01-02) Package added to pkgsrc.se, version crawl-0.4nb12 (created)