Next | Query returned 16 messages, browsing 11 to 20 | previous


CVS Commit History:
   2005-09-28 22:59:08 by Stoned Elipot | Files touched by this commit (3)
Log message:
Set AWK properly for NetBSD. Noticed by mrg@.

Bump PKGREVISION.
   2005-09-28 22:52:28 by Roland Illig | Files touched by this commit (180)
Log message:
Replaced "# defined" with "yes" in Makefile variables like GNU_CONFIGURE,
NO_BUILD, USE_LIBTOOL.
   2005-04-11 23:48:17 by Todd Vierling | Files touched by this commit (3539)
Log message:
Remove USE_BUILDLINK3 and NO_BUILDLINK; these are no longer used.
   2005-02-24 14:41:00 by Alistair G. Crooks | Files touched by this commit (190)
Log message:
Add RMD160 digests.
   2005-02-09 09:40:33 by Stoned Elipot | Files touched by this commit (3)
Log message:
Change default TMPDIR to more modern /var/tmp instead of /usr/tmp.
Claim stewardship.

Bump PKGREVISION to 1.
   2004-07-14 13:41:52 by Alistair G. Crooks | Files touched by this commit (4) | Imported package
Log message:
Initial import of vip, a script which lets you edit data (via $EDITOR
or $VISUAL) at any point in a pipe. From a nudge from David Maxwell.

	Normally, in a pipeline, when you need to edit some phase of the data
	stream, you use a standard tool such as sed, grep, or awk to alter,
	filter, or otherwise manipulate the stream. One potential problem with
	this approach is that the manipulations have to be very well thought out
	in advance. Another is that the manipulations will probably need to be
	applied uniformly. And third, the data must be very well understood in
	advance. Not all situations and data easily conform to these
	constraints.

	Alternatively, when the changes needed for the data are more than
	trivial, or perhaps you just don't feel like expending the mental energy
	needed to work out all the expressions in advance, a typical approach
	might be to run some process or pipeline, dump output to a file, edit
	the file with vi, pico, or emacs, then push the data along to the next
	phase by using the file as input to some additional process or pipeline.
	The catch here - other than the sheer awkwardness of this process - is
	that you have to remember to come back later and clean up all of those
	little and not-so-little "temporary" files.

	So, wouldn't you just like to be able to tap in an edit session at any
	arbitrary point in the pipeline, do your magic on the data, then have it
	automagically continue on its merry way? The vip program provides this
	functionality, and operates syntactically just like any other filter.
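The approach described above can be sketched in a few lines of shell. This is a hypothetical illustration, not vip's actual source: the function name `pipe_edit` is made up, and a real implementation (as the description implies) must also reconnect the editor to the terminal, since stdin and stdout belong to the pipe at that point.

```shell
# Hypothetical sketch of an edit-in-pipeline filter (not vip's real code).
# Capture stdin into a temporary file, let $VISUAL or $EDITOR work on it,
# then forward the edited contents downstream on stdout.
pipe_edit() {
    tmp=$(mktemp "${TMPDIR:-/var/tmp}/vip.XXXXXX") || return 1
    cat > "$tmp"                        # drain the upstream data
    ${VISUAL:-${EDITOR:-vi}} "$tmp"     # edit it interactively
    cat "$tmp"                          # pass the result along
    rm -f "$tmp"                        # no "temporary" files left behind
}
```

With something like this, a pipeline such as `du -k * | pipe_edit | sort -n` would pause for an edit session in the middle, then continue. Note that an interactive editor needs its input and output redirected to `/dev/tty` to work mid-pipe; this sketch glosses over that detail.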
