
Re: [OCLUG-Tech] is "xargs" really still useful just for limiting command line size?

  • Subject: Re: [OCLUG-Tech] is "xargs" really still useful just for limiting command line size?
  • From: James <bjlockie [ at ] lockie [ dot ] ca>
  • Date: Fri, 16 Mar 2018 14:46:57 +0000 (UTC)
It is necessary for running a command on all files matching a pattern.
Wildcard expansion substitutes the full name of every match onto the command line, so thousands of matching names/paths can easily blow past the command-line length limit.
I think it works that way. :-)
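
A quick way to watch xargs do that splitting (a minimal demo, assuming GNU or BSD xargs; the batch size depends on the system's limits):

  $ seq 1 1000000 | xargs echo | wc -l

Each echo invocation prints one line, so any count greater than 1 means xargs had to break the input into multiple commands.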


On March 16, 2018 9:27:35 AM "Robert P. J. Day" <rpjday [ at ] crashcourse [ dot ] ca> wrote:


  a course i taught recently had a section on "xargs", emphasizing its
value(?) in being able to run a command in bite-size pieces but, these
days, is that really that much of an issue?

  IIRC (and i might not), the historical limiting factor for command
line length was the limit of an internal buffer in the shell that was
used to build the command to be run, and it used to be fairly small
(5000 bytes?). these days, i'm fairly sure bash can handle far longer
commands than that.
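
  (for the record, the binding limit on a modern system is the kernel's
execve() argument space, not a shell buffer; assuming a POSIX getconf,
you can ask for it directly:

  $ getconf ARG_MAX

on typical linux boxes that prints 2097152, i.e. 2 MiB, and since
kernel 2.6.23 it actually scales with the stack rlimit.)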

  now i can see the obvious value of xargs in that it supports a ton
of cool options, like defining the delimiter used to parse the input
stream (more on that below), but WRT simply limiting the command line
size, rather than something like this:

  $ find . -type f -name core | xargs rm -f

i would simply assume i can generate a really long command and write:

  $ rm -f $(find . -type f -name core)
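
  (which works right up until the expansion exceeds that limit, at
which point the exec fails outright instead of running on a partial
list; with enough matching files you get something like

  bash: /bin/rm: Argument list too long

while the xargs pipeline just splits the same list across several rm
invocations.)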

and, yes, there's always "find .... -exec rm -f {} \;" and so on. but
does bash these days have any need for simple command line limiting?
and what would that limit be, anyway?
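
  two asides, for completeness: POSIX find's "+" terminator to -exec
batches arguments up to the system limit exactly the way xargs does:

  $ find . -type f -name core -exec rm -f {} +

and, on the delimiter point above, filenames containing spaces or
newlines will break a plain pipe into xargs; assuming GNU or BSD
find/xargs, the null-delimited form is the robust variant:

  $ find . -type f -name core -print0 | xargs -0 rm -f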

rday
_______________________________________________
Linux mailing list
Linux [ at ] lists [ dot ] oclug [ dot ] on [ dot ] ca
http://oclug.on.ca/mailman/listinfo/linux