
Re: [OCLUG-Tech] is "xargs" really still useful just for limiting command line size?

  • Subject: Re: [OCLUG-Tech] is "xargs" really still useful just for limiting command line size?
  • From: Peter Sjöberg <peters-oclug [ at ] techwiz [ dot ] ca>
  • Date: Tue, 20 Mar 2018 11:45:48 -0400
On 2018-03-16 09:27 AM, Robert P. J. Day wrote:
> 
>   course i taught recently had a section on "xargs", emphasizing its
> value(?) in being able to run a command in bite-size pieces but, these
> days, is that really that much of an issue?
> 
>   IIRC (and i might not), the historical limiting factor for command
> line length was the limit of an internal buffer in the shell that was
> used to build the command to be run, and it used to be fairly small
> (5000 bytes?). these days, i'm fairly sure bash can handle far longer
> commands than that.
> 
>   now i can see the obvious value of xargs in that it supports a ton
> of cool options like defining the delimiter to be used in parsing the
> input stream and so on, but WRT simply limiting the command line size,
> rather than something like this:
> 
>   $ find . -type f -name core | xargs rm -f
And that handles any number of "core" files (splitting into batches is exactly what xargs is for), but it breaks on filenames containing whitespace.
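
A safer variant, as a minimal sketch assuming GNU or BSD find and xargs, pairs -print0 with -0 so that spaces or newlines in filenames cannot break the argument split:

```shell
# NUL-separate the names so whitespace in filenames is handled safely;
# xargs still batches the arguments to stay under the kernel's limit.
find . -type f -name core -print0 | xargs -0 rm -f
```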

> 
> i would simply assume i can generate a really long command and write:
> 
>   $ rm -f $(find . -type f -name core)
> 

This one does fail, because the expanded list exceeds the kernel's argument-size limit (but "find -delete" would work).
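
The limit the $() version hits is ARG_MAX, the kernel's cap on the combined size of the argument list and environment passed to a new process, not a shell buffer. You can query it directly:

```shell
# Maximum combined size, in bytes, of argv plus the environment
# for execve(); commonly around 2 MB on modern Linux systems.
getconf ARG_MAX
```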
Now what if you have a million files to delete, or you want to delete all
files matching a specific pattern?

  ls -f|grep "_201[0-7]-"|xargs -n1000 rm

With that you list all files in some unsorted order (which doesn't take
long even for a million files), select all files with 2010-2017 in the
name, and limit the number of files passed to each rm invocation to 1000
so the command line won't overflow.
I use "xargs -n<number>" very often in my scripts.
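The batching is easy to see with a toy pipeline, using seq and echo here as stand-ins for the real input and command:

```shell
# xargs hands echo at most two arguments per invocation,
# so five inputs become three separate echo runs:
#   1 2
#   3 4
#   5
seq 5 | xargs -n 2 echo
```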

Another use is when I just want it all on one line

  some_command|sort|xargs

so I can copy the output to some other place without having to unwrap it.
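
When no command is named, xargs defaults to echo, so it simply joins its input words into one space-separated line. A toy example, with printf standing in for some_command:

```shell
# The three input lines come back as a single line: "a b c"
printf 'b\na\nc\n' | sort | xargs
```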

/ps