
Re: [OCLUG-Tech] is "xargs" really still useful just for limiting command line size?

  • Subject: Re: [OCLUG-Tech] is "xargs" really still useful just for limiting command line size?
  • From: J C Nash <profjcnash [ at ] gmail [ dot ] com>
  • Date: Fri, 16 Mar 2018 09:49:10 -0400
This may introduce a tangent, but I find the practical limit is often
not the coded one but issues such as:
 - long lines wrapping around and becoming difficult to read
 - inevitable fumble fingers
 - getting 250 characters into the line and not remembering whether
   the parameter should be X or x, and what the difference is.

At the next OCLUG meeting I'll likely be leading a discussion of what
makes good scripts, looking at both command-line and GUI options. I've
recently had some success writing moderately long scripts, then
interfacing to them with Double Commander (DC), where I can create
buttons that have informative mouse-overs, and I can go look at the
actual script if I want.

The main benefits for me are:
1) I don't make so many typing errors in path arguments. DC picks up
the selected files as well as the paths of the open left and right
panes of the file manager (a sketch of such a helper script follows
after this list).
2) my wife has actually become quite enthused about using such tools,
after more than a couple of decades of the two-syllable "Joooo oohn"
whenever a script had to be executed and she could not remember it.
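
For what it's worth, the scripts behind those buttons can stay quite
plain. A minimal sketch, assuming the button is set up to pass the
active pane's path as the first argument and the selected files after
it (the exact parameter placeholders depend on your Double Commander
configuration):

  #!/bin/sh
  # invoked by a file-manager button: $1 is the active pane's path,
  # the remaining arguments are the selected files
  pane=$1
  shift
  for f in "$@"; do
      printf 'would process %s (pane: %s)\n' "$f" "$pane"
  done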

A second issue is that if you work assuming a long buffer, next week
you'll be helping someone and their buffer will be 126 characters, as
it was in old MS-DOS (the command tail was a 128-byte area; a length
byte and a terminating carriage return left 126 usable characters).
And if you happen to be working on CD/DVD filesystems, as I was
recently, there are filename-length limits waiting to bite you.
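
A quick way to check what a particular filesystem will accept (the
values shown here are from a typical Linux ext4 mount; yours may
differ):

  $ getconf NAME_MAX .     # longest filename this filesystem allows
  255
  $ getconf PATH_MAX .     # longest relative pathname
  4096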

Best, JN

On 2018-03-16 09:27 AM, Robert P. J. Day wrote:
> 
>   a course i taught recently had a section on "xargs", emphasizing its
> value(?) in being able to run a command in bite-size pieces but, these
> days, is that really that much of an issue?
> 
>   IIRC (and i might not), the historical limiting factor for command
> line length was the limit of an internal buffer in the shell that was
> used to build the command to be run, and it used to be fairly small
> (5000 bytes?). these days, i'm fairly sure bash can handle far longer
> commands than that.
> 
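> (as far as i can tell, the binding limit today isn't a shell buffer
> at all but the kernel's ARG_MAX, enforced at execve() time; old
> kernels had NCARGS of 5120 bytes, which may be where the "5000"
> memory comes from. a quick check, with output from a typical linux
> box:)
> 
>   $ getconf ARG_MAX
>   2097152
> 
> (since linux 2.6.23 that figure scales with the stack rlimit, roughly
> a quarter of it, and each individual argument is further capped at
> 128 KiB.)
> 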
>   now i can see the obvious value of xargs in that it supports a ton
> of cool options like defining the delimiter to be used in parsing the
> input stream and so on, but WRT simply limiting the command line size,
> rather than something like this:
> 
>   $ find . -type f -name core | xargs rm -f
> 
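> (one caveat with that pipe: filenames containing spaces or newlines
> get mangled on the way through. GNU find and xargs can pass NUL
> delimiters instead, e.g.:)
> 
>   $ find . -type f -name core -print0 | xargs -0 rm -f
> 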
> i would simply assume i can generate a really long command and write:
> 
>   $ rm -f $(find . -type f -name core)
> 
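> (note that form word-splits the results, so a file named "core dump"
> becomes two arguments, and since rm is an external command the
> expansion still runs into the kernel's execve() limit; for example:)
> 
>   $ touch './core dump'
>   $ rm -f $(find . -name 'core*')   # tries "./core" and "dump"
> 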
> and, yes, there's always "find .... -exec rm -f {} \;" and so on. but
> does bash these days have any need for simple command line limiting?
> and what would that limit be, anyway?
> 
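> (two more options there: "-exec ... {} +" batches arguments much as
> xargs does, and GNU find has "-delete" built in:)
> 
>   $ find . -type f -name core -exec rm -f {} +
>   $ find . -type f -name core -delete
> 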
> rday
> _______________________________________________
> Linux mailing list
> Linux [ at ] lists [ dot ] oclug [ dot ] on [ dot ] ca
> http://oclug.on.ca/mailman/listinfo/linux
>