a course i taught recently had a section on "xargs", emphasizing its value(?) in being able to run a command in bite-size pieces. but, these days, is that really much of an issue? IIRC (and i might not), the historical limiting factor for command line length was the size of an internal buffer in the shell that was used to build the command to be run, and it used to be fairly small (5000 bytes?). these days, i'm fairly sure bash can handle far longer commands than that.

now, i can see the obvious value of xargs in that it supports a ton of cool options, like defining the delimiter to be used in parsing the input stream and so on. but WRT simply limiting the command line size, rather than something like this:

  $ find . -type f -name core | xargs rm -f

i would simply assume i can generate a really long command and write:

  $ rm -f $(find . -type f -name core)

and, yes, there's always "find .... -exec rm -f {} \;" and so on. but does bash these days have any need for simple command line limiting? and what would that limit be, anyway?

rday
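
p.s. to partly answer my own "what would that limit be" question: i believe the relevant number these days is the kernel's ARG_MAX (the space allowed for arguments plus environment on an exec), not anything internal to bash, and getconf should print it. a rough sketch of how i'd poke at it (i'm assuming a linux box, and the exact numbers will vary by system, so treat this as illustration rather than gospel):

  # print what i believe is the per-exec argument/environment limit
  $ getconf ARG_MAX

  # force an actual exec (echo is a bash builtin, so use /bin/echo)
  # with a big argument list; a modest list should work fine, while an
  # absurdly large one should fail with "Argument list too long"
  $ /bin/echo $(seq 1 100000) > /dev/null
  $ /bin/echo $(seq 1 10000000) > /dev/null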