jots on Linux/UNIX system administration, bash and perl - Tom Rodman

home
bashrc files
uqjau scripts
cygwin setup

site uses:

GNU Free, Libre, and Open Source Software (FLOSS) Licenses

GNU/copyleft.org

Copyleft and the GNU General Public License: A Comprehensive Tutorial

Bradley Kuhn (GPL license expert/enforcer) and lawyer Karen Sandler have a podcast that covers the copyleft licenses. Their podcast, called Free as in Freedom, has been running for years now and is hosted at http://faif.us/.

Why GNU matters, GNU history.

License overview/summary:

GNU SCM repo hosts for FLOSS projects

GNU date command

The GNU date command (part of the coreutils package) has a wide range of options, including relative offset strings like "tomorrow", "yesterday", and "2 weeks ago". It supports some date math, and time zone conversions.

GNU date math examples

 ~ $ date
 Tue, Sep 28, 2010  3:32:17 PM
 ~ $ date --date '2 days ago'
 Sun, Sep 26, 2010  3:32:28 PM
 ~ $ date -d '6:00pm 2 days ago'
 Sun, Sep 26, 2010  6:00:00 PM
 ~ $ date --date '11am yesterday'
 Mon, Sep 27, 2010 11:00:00 AM
 ~ $ date --date '6pm tomorrow'
 Wed, Sep 29, 2010  6:00:00 PM
 ~ $ date --date "$(date --date 'next month' '+%m/1/%Y') -1 day"
 Thu, Sep 30, 2010 12:00:00 AM
 ~ $ : above is last day in month
 ~ $ date --date 'now +10 days'
 Fri, Oct 08, 2010  3:33:14 PM
 ~ $ date -d "1am +3 weeks" '+%H:%M %D'
 01:00 10/19/10
 ~ $ date --date 'Jan 10 00:00 -0600 - 1 hour - 50 min'
 Sat, Jan 09, 2010 10:10:00 PM
 ~ $ date --date "4:59:54 1 hour ago 53 min ago 46 sec ago"
 Tue, Sep 28, 2010  3:06:08 AM
 ~ $ date --date 'Dec 25'
 Sat, Dec 25, 2010 12:00:00 AM
 ~ $ date --date 'Jan 9 11pm + 1 hour'
 Sun, Jan 10, 2010 12:00:00 AM
 --snip
 ~ $ date
 Fri, Nov 19, 2010 10:18:49 AM
 ~ $ date --date "last sunday"
 Sun, Nov 14, 2010 12:00:00 AM
 ~ $ date --date "next tue"
 Tue, Nov 23, 2010 12:00:00 AM
 --snip/daylight savings
 $ date --date "3/14/2010 1:59am + 2 min"
 Sun, Mar 14, 2010  3:01:00 AM
 $ date --date "3/14/2010 1:59am + 1 min"
 Sun, Mar 14, 2010  3:00:00 AM
 $ date --date "3/15/2010 1:59am + 1 min"
 Mon, Mar 15, 2010  2:00:00 AM

time zone conversions, epoch sec

 ~ $ TZ=Asia/Calcutta date --date '7pm fri CDT'
 Sat, Oct 02, 2010  5:30:00 AM
 ~ $ TZ=Europe/Berlin date -d "1970-01-01 UTC $(TZ=America/Chicago date --date "6:15am" '+%s') sec"
 Tue, Sep 28, 2010  1:15:00 PM
 ~ $ date -d '1970-01-01 UTC 0 sec'
 Wed, Dec 31, 1969  6:00:00 PM
 ~ $ TZ=America/Chicago date -d '1970-01-01 UTC 0 sec'
 Wed, Dec 31, 1969  6:00:00 PM
 ~ $ TZ=America/New_York  date -d '1970-01-01 UTC 0 sec'
 Wed, Dec 31, 1969  7:00:00 PM

a few date output format examples

Many more formats are available than shown here.

 ~ $ date +%-m/%-d/%Y
 9/28/2010
 ~ $ date '+%F_%H%M%S'
 2010-09-28_154837
 ~ $ date '+%a %F %T.%N'
 Tue 2010-09-28 15:49:31.362339100
 ~ $ date --date='25 Dec' +%j
 359

learning approach: gather all help in one file

For learning or reviewing complex tools that take months to master, an approach I use is to gather all the related help into a single vim edit session. For example, consider the tool 'gpg'. Here's the commandline I use to concatenate the texinfo files, man pages, and selected help webpages:

 true; ( set -x;: {{{;gpg2 --help; : }}};
   : {{{;_vwg http://www.gnupg.org/gph/en/manual.html;: }}};
   : {{{;_vwg http://www.dewinter.com/gnupg_howto/english/GPGMiniHowto.txt;
   : }}}; : {{{; zcat /usr/share/info/{gnupg.info*gz,pinentry.info.gz};
   : }}};: {{{; _m gpg2 ;: }}};: {{{; _m gpg-agent ;
   : }}} ) 2>&1 | _2v -i my-GPG-help

'true' is there only for ease of mouse-selecting the text for copy/pasting.

'set -x' lets you see which commands ran.

': {{{' and ': }}}' introduce vim folds, which place each help topic in a separate fold or block. In vim, type ":help fold".

_vwg is a tool from uqjau which uses wget and pandoc to convert a webpage to markdown.

_m is a 4-line bash function that runs "man "$@"|col -bx", thus converting a man page to ASCII.

_2v ("to vim") is a personal bash function/filter that creates a temp file with all the output content. It also creates a 1-line vim command in a file at a fixed location, which I source from within vim. So within vim I can import the content using a 2-keystroke custom vim "leader command". In vim, type ":help leader".

ex 1 line command created by _2v:

    e /var/home/rodmant/.vim/tmp/2v.STDIN.my-GPG-help.SunJan04.0512.548

ex snip of output:

    $ head /var/home/rodmant/.vim/tmp/2v.STDIN.my-GPG-help.SunJan04.0512.548
    + : '{{{'
    + gpg2 --help
    gpg (GnuPG) 2.0.10
    libgcrypt 1.4.4
    Copyright (C) 2009 Free Software Foundation, Inc.
    --snip

"useful use of cat" - appending to predefined STDIN

The function "Bc" below starts a 'bc' session: it first echoes commands to bc's STDIN to set its scale and define a function, then uses 'cat' to connect the starting shell's STDIN to bc, so you can interact with bc (from the keyboard, for example).

 Bc()
 {
   : --------------------------------------------------------------------
   : Synopsis: Wrapper for 'bc'. Defines an exponential function
   : 'p (a,b) { return (e  ( l (a) * b )) }'
   : --------------------------------------------------------------------
   {
     echo 'define p (a,b) {
             return (e  ( l (a) * b ))
           }'
     echo scale=3
     cat
   }|
   bc -lq
 }                                    

SHELL background jobs

I like to minimize the number of shells I have open, so when a command takes more than 5 seconds, I background it; there are several approaches.

In the general case, consider foo to be a builtin or external command. Where noted, 'foo' could represent a complex bash command, as in

  for x in a b c; do true|false|true; done

The simplest way to background is:

  foo&

This does not always work smoothly. A backgrounded foo will suspend itself if it reads from the terminal; with 'stty tostop' set, it will also suspend when it writes STDOUT.
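One common mitigation (a sketch; 'foo' here is a stand-in for your real command, and the log pathname is arbitrary) is to detach the job from the terminal yourself:

```shell
foo() { echo working; }            # stand-in for your real command

# Redirect STDIN from /dev/null and send output to a log file, so
# the backgrounded job never touches the tty:
foo </dev/null >/tmp/foo.log 2>&1 &

wait                               # only so this demo can show the log
cat /tmp/foo.log
```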

If you have permissions to run 'at', you can:

 echo foo|batch
 # or     
 echo foo | at now + 45 minutes

 at 8am Sun <<\END
 foo -xyz
 for x in a b c; do true|false|true; done
 END

setsid will run the job in a new session (and process group), separate from your current shell.

 setsid foo

 # or:

 setsid bash <<\END
 {
 du /var 
 date
 } > /tmp/var-df 2>&1
 END

The job will run in the background, with no tty (no terminal) and no association with your shell session (it will not show up in 'jobs' output). With setsid, logging out of your shell session should never impact the job.

I have a script called '_bg' in uqjau, which is a wrapper for setsid.

 $ head -23 $_C/_bg
 #!/usr/bin/env bash

 # -------------------------------------------------------------------- 
 # Synopsis: Run simple command in background in separate
 # process session.  Will not be seen by your shell as a job.  Log
 # STDOUT and STDERR to file. Simple command => exactly 1 command
 # and its args. 
 # -------------------------------------------------------------------- 
 # Usage: 
 #   ourname SIMPLE-COMMAND_HERE
 #   ourname - 
 #   ourname 
 #     #   (in last 2 cases above)  => shell script to run is from STDIN
 #     #   (complex shell commands OK) 
 # -------------------------------------------------------------------- 
 # Options:
 #   -l                  run in bash login shell w/ -i
 #   -e                  set pathname env vars per _29r_dirdefs output
 #   -o LOGPATHNAME
 #   -n JOBNAME          becomes part of log name
 #
 #   -W                  run nothing, but show recent logs
 # -------------------------------------------------------------------- 

I seldom use '_bg'. The simple workaround I use all day is in ~/.inputrc:

  "\C-xB": "\C-a(: set -x;: pwd; \C-e) < /dev/null 2>&1|ff &\C-b"
    # (works for both simple and complex commands)

    # For help on ~/.inputrc, see 'man bash' (Readline Initialization).

When I type:

  foo\C-xB
    # foo can be a complex bash commandline, with pipes, switches etc

  # result is:
  (: set -x;: pwd; foo) < /dev/null 2>&1|ff &
    # Remove the leading colons above for verbose runs.

By redirecting foo's STDIN to /dev/null, you prevent it from trying to access your tty. foo's STDOUT and STDERR are piped to 'ff', which logs the job to a new tempfile; when foo completes, 'ff' beeps and rudely displays the log pathname. If you use 'ff -i baz', then 'baz' becomes part of the logfile name. ff is part of uqjau.

managing cron jobs

When one of my cron jobs fails, the wrapper script that launched and logged it places an appropriately named symbolic link to the log file into a normally empty directory. Another cron job watches that dir and emails when a link exists, alerting you to the failed job and positioning you to see the detailed log.

The wrapper script is called 'jobmon', and is part of uqjau. jobmon has a fair number of options; for example, it supports passing in, via its args, a quoted shell commandline for the script you want to run.
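The underlying pattern (a minimal sketch with hypothetical names and paths, not jobmon's actual code) looks roughly like:

```shell
logdir=/tmp/cronlogs           # hypothetical log directory
failed=$logdir/failed          # normally empty; a watcher emails when non-empty
mkdir -p "$logdir" "$failed"

run_logged() {
  local name=$1; shift
  local log=$logdir/$name.$(date +%F_%H%M%S).log
  if ! "$@" >"$log" 2>&1; then
    # on failure, point the watcher at the detailed log
    ln -sf "$log" "$failed/$name"
  fi
}

run_logged demo false   # simulate a failing job; leaves a link in $failed
```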

analysis of [acm] file timestamps in /tmp and ~/tmp dirs for tmpwatch

'/usr/sbin/tmpwatch' is typically run from cron to clean up /tmp. Here is a snip from its man page:

 If the --atime, --ctime or --mtime options are used in
 combination, the decision about deleting a file will be based
 on the maximum of these times.  The --dirmtime option implies
 ignoring atime of directories, even if the --atime option is
 used.

 -u, --atime
    Make  the  decision  about  deleting  a file based on the file's
    atime (access time). This is the default.
    
    Note that the periodic updatedb file system scans keep the atime
    of directories recent.

 -m, --mtime
    Make  the  decision  about  deleting  a file based on the file's
    mtime (modification time) instead of the atime.

 -c, --ctime
    Make the decision about deleting a  file  based  on  the  file's
    ctime (inode change time) instead of the atime; for directories,
    make the decision based on the mtime.

The last two args for tmpwatch are always: <hours> <dirs>; unfortunately, -u, -m, and -c all refer to the single <hours> argument.

In my personal (non root) crontab, I run a modified copy of the shell script /etc/cron.daily/tmpwatch:

 $ egrep 'flags=|days=|/usr/sbin/tmpwatch'  ~/bin/tmpwatch
 #flags=-umc
 flags=${tmpwatch_flags:--cm}
 days=${tmpwatch_days:-5}
 /usr/sbin/tmpwatch --verbose "$flags" $[24 * $days] "${@:-${HOME}/tmp}"

I suggest you study the timestamps in your tmp dirs to see if atimes or ctimes are being freshened by other processes; only after that should you finalize your tmpwatch <hours> argument and -u, -m, and -c switches.

Here I run my bash function '_tmpf_timestamps' to look at timestamps below ~/tmp:

 $ _tmpf_timestamps -c 10 ~/tmp
 Total non dirs: [114] in [/var/home/rodmant/tmp]          Dirs: [127]          Empty Dirs: [7]

 count of non dirs w/[mca] timestamp-age older than 'col 1'-days :

   i:  0    m:   114     c:   114     a:   114
   i:  1    m:    64     c:    46     a:    45
   i:  2    m:    54     c:    36     a:    36
   i:  3    m:    51     c:    33     a:    33
   i:  4    m:    40     c:    22     a:    22
   i:  5    m:    39     c:    21     a:    21
   i:  6    m:    39     c:    21     a:    21
   i:  7    m:    39     c:    21     a:    21
   i:  8    m:    39     c:    21     a:    21
   i:  9    m:    39     c:    21     a:    21
   i: 10    m:    39     c:    21     a:    21

My theory is that tmpwatch does not clean up sockets or named pipes (the 21 items above).

 $ ls -lct $(find . ! -type d -ctime +5) |head -2
 srwxr-xr-x 1 jdoe crew 0 Oct 21 07:41 ./sock=
 prw-rw-rw- 1 jdoe crew 0 Feb 13  2014 ./_untartmp.dl.Irli3917/home/jdoe/s2f|
 $ file ./_untartmp.dl.Irli3917/home/jdoe/s2f 
 ./_untartmp.dl.Irli3917/home/jdoe/s2f: fifo (named pipe)

Here is my bash function '_tmpf_timestamps':

  /usr/local/etc/team/mke/iBASHrc $ _bashfunccodegrep _tmpf_timestamps < ./functions
  _tmpf_timestamps()
  {
    : --------------------------------------------------------------------
    : Synopsis: Analyze timestamps of either tmpfiles or empty dirs. An
    : aid in debugging the behavior of tmpwatch script.
    : --------------------------------------------------------------------
    : Usage: $ourname [-d] DIRPATHNAME
    : '  -d        Look only at empty dirs instead of files.'
  
    local opt_true=1 opt_char badOpt=
    OPTIND=1
      # OPTIND=1 for 2nd and subsequent getopt invocations; 1 at shell start
  
    local OPT_d= OPT_c=
    while getopts dc: opt_char
    do
       # save info in an "OPT_*" env var.
       [[ $opt_char != \? ]] && eval OPT_${opt_char}="\"\${OPTARG:-$opt_true}\"" ||
         badOpt=1
    done
    shift $(( $OPTIND -1 ))
  
    # If badOpt:  If in function return 1, else exit 1:
    [[ -z $badOpt ]] || { : help; return 1 &>/dev/null || exit 1; }
  
    #unset opt_true opt_char badOpt
  
    (
    [[ $OPT_d == -d ]] && action="-type d -empty" || action="-type f"
    tdir=${1:-/tmp}
    [[ -d $tdir ]] || { echo $FUNCNAME:\[$tdir] not a dir; return 1; }
    tdir=$(cd "$tdir";pwd -P)
      # make tdir "find friendly"
  
    emptydirs=$(find $tdir -type d -empty 2>/dev/null|wc -l)
    echo Total files: \[$(find $tdir -type f 2>/dev/null |wc -l)] in \[$tdir] \
      "         "Dirs: \[$(find $tdir -type d 2>/dev/null|wc -l)] \
      "         "Empty Dirs: \[$emptydirs]
    if [[ $emptydirs = 0 && $action =~ -type\ d\ -empty ]] ;then
      return 1
    fi
  
    echo
    echo "count of files w/[mca] timestamp-age less than 'col 1'-days :"
    echo
    for (( i=1; $i <= ${OPT_c:-15} ;i += 1));do
  
      m=$(find $tdir $action -mtime -$i 2>/dev/null|wc -l)
      c=$(find $tdir $action -ctime -$i 2>/dev/null|wc -l)
      a=$(find $tdir $action -atime -$i 2>/dev/null|wc -l)
  
      printf "i:%3d    m:%6d     c:%6d     a:%6d\n" $i $m $c $a
  
    done |sed -e 's~^~  ~'
    )
  }

--

construct similar to 'eval'

 $ cmd='set -- a s d ;for f in "$@";do echo $f;done'
 $ source <( echo "$cmd" )      ## Only works in bash 4.x
 a
 s
 d
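For comparison, plain 'eval' gives the same result and works in older shells:

```shell
cmd='set -- a s d; for f in "$@"; do echo $f; done'
eval "$cmd"
# prints a, s, d on separate lines
```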

"disk full" cleanup - bash function to help

Below is a bash function '_diskfull', used to help identify large files to delete manually. The bash function '_bashfunccodegrep' is used to display '_diskfull' from the file "functions":

 /usr/local/etc/team/mke/iBASHrc $ _bashfunccodegrep _diskfull < functions
 _diskfull()
 {
   : _func_ok2unset_ team function
   : Size-sorted output of: cd ARG1 ... du -xSma
   : Safe to run on /, because of -x switch to du, stays in / fs -- this has been tested.
   : -S == do not include size of subdirectories
   : Advantages of -S:
   : .. dirs w/small files only at their top level get low sort rank, top level as in "GNU find's depth 1"
   : .. fewer size sum calculations
   (
   set -eu
   local fs="${1:-$PWD}"
   fs_bn="$(basename "$(canPath "$fs")")"
     : canPath "$fs", could be replaced with: readlink -f "$fs"

   if [[ $fs_bn == / ]] ;then
     fs_bn=ROOT
   fi
   local tmpdir=${TMPDIR:-~/tmp}
   [[ -d $tmpdir ]] || tmpdir=/tmp

   local out="$( mktemp $tmpdir/$FUNCNAME.$fs_bn.$(hostnameshort).XXXXX)"
     du_stderr=$(mktemp $tmpdir/$FUNCNAME.du_stderr.XXXXX)
   sort_stderr=$(mktemp $tmpdir/$FUNCNAME.sort_stderr.XXXXX)

     cd "$fs"
     echo $FUNCNAME: writing to $out
     (
     set -x
     : CWD: $PWD writing to $out
     nice du -xSma 2>$du_stderr|nice sort -T $tmpdir -k1,1rn 2>$sort_stderr
     :
     cat $du_stderr
     cat $sort_stderr
     ) > $out 2>&1
    rm -f $du_stderr $sort_stderr
      # rm runs inside the subshell, where du_stderr and sort_stderr are set
  )
 }
 

GNU tar to remote tape drive

I run cron scheduled backups to rsync.net, and tape backups - to either DDS4 or LTO tapes.

GNU tar supports tar backup to a tape drive on a remote host.

From GNU tar texinfo help:

 `--rsh-command=CMD'
     Notifies `tar' that it should use CMD to communicate with remote devices.

For example:

tar --rsh-command=/usr/bin/ssh ...

The code below is available in uqjau.

I put together wrapper functions for tar and mt, in a file to be sourced by bash ( uqjau file: "_tape_utils.shinc" ):

  $ _bashfuncgrep _tar < ./_tape_utils.shinc
  _tar()
  {
    # --------------------------------------------------------------------
    # Synopsis: GNU tar wrapper to support remote tape drive
    # --------------------------------------------------------------------
    (set -x;sleep 5;time tar ${_use_ssh+--rsh-command=$_use_ssh} "$@")
      # _use_ssh if defined is path to ssh, typically /usr/bin/ssh
  }

The script I use for backing up a linux host to (remote or local) tape w/tar is called "_backupall", and is also part of uqjau. The bash function '_bashfuncgrep' is in iBASHrc.

Safe way to update or mv a symbolic link.

( applies to GNU: ln, mv, and cp )

Example of the snafu:

   ~ jdoe $ ls -ldog *
   lrwxrwxrwx 1    2 Mar 20 20:21 latest -> d3/
   lrwxrwxrwx 1    2 Mar 20 20:20 prev -> d1/
   ~ jdoe $ ln -sf d2 prev     # WRONG
   ~ jdoe $ ls -ldog *
   lrwxrwxrwx 1    2 Mar 20 20:21 latest -> d3/
   lrwxrwxrwx 1    2 Mar 20 20:20 prev -> d1/
   ~ jdoe $ ls -ld d1/*
   lrwxrwxrwx 1 2 Mar 20 20:23 d1/d2 -> d2

solution:

   ~ jdoe $ ln -Tsf d2 prev    # RIGHT
   ~ jdoe $ ls -ldog *
   --snip
   lrwxrwxrwx 1    2 Mar 20 20:23 prev -> d2/

Ex: renaming an existing symbolic link over another existing symbolic link:

   mv -Tf saz yap
     # -T, --no-target-directory == treat DEST as a normal file

   # Without the -T, if yap had been a symbolic link to a dir, then
   # the symbolic link 'saz' would have ended up under that dir.

Regex grep of: all commands in PATH, and bash: aliases, built-ins, keywords, and functions

 _cg()
 { 
   : Regex grep of: all commands in PATH, and bash: aliases, built-ins, keywords, and functions.
   : Usage: $FUNCNAME [REGEX]

   : --http://stackoverflow.com/questions/948008/linux-command-to-list-all-available-commands-and-aliases
   : compgen -c will list all the commands you could run.
   : compgen -a will list all the aliases you could run.
   : compgen -b will list all the built-ins you could run.
   : compgen -k will list all the keywords you could run.
   : compgen -A function will list all the functions you could run.
   : compgen -A function -abck will list all the above in one go.

   local filter
   if [[ $# == 1 ]];then
     filter="| egrep -i '$1'"
   fi
   (set -x; eval "compgen -A function -abck ${filter:-}")
 }

'_cg' is part of iBASHrc.

Output is not sorted. Example listing all commands, snipped by sed:

 $ _cg 2>&1 |sed -ne 2115,2120p
 pax
 eu-readelf
 nano
 fusermount
 gitk
 xxd

Example grep for "pk.*er":

 $ _cg 'pk.*er'
 + PATH+=:/usr/local/7Rq/scommands/cur
 + eval 'compgen -A function -abck | egrep -i '\''pk.*er'\'''
 ++ compgen -A function -abck
 ++ egrep -i 'pk.*er'
 pklogin_finder
 pkinit-show-cert-guid

vim custom commands to kill or wipe current buffer and switch to previous

I try to stay in a single vim session, typically open for weeks, so the number of buffers can get out of control. Here are a couple of simple housekeeping custom .vimrc commands that I use all day long:

 command Kb :b#|bdel#
 command KB :b#|bw!#

where 'b#' switches to the previous buffer, then 'bdel#' deletes the buffer you were in when you ran the 'Kb' command.

vim function: search for all in list of words, in any order

Just created. Tips for improving it gratefully accepted. Thanks to 'zapper' for the regex.

 function Mfind(...)                                                                                  
   let searchStg=""
   let i = 0
   for stg in a:000
     let searchStg .=    i == 0 ? ".*" . stg : "\\&.*" . stg
     let i += 1
   endfor
   exe "g;" . searchStg
 endfunction

 " ex
   :call Mfind("red","blue","white")

bad block hard drive check: nice dd < /dev/sda > /dev/null

Run this command as root:

 dd < /dev/sda > /dev/null

It reads all blocks on the entire 'sda' device (ie the first hard drive). Only read errors are displayed -- you should have none. Be very careful whenever /dev/sda shows up on the root commandline!

A crude test, but very simple.
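GNU dd's default 512-byte blocks make this slow; a larger block size helps. The demo below uses a scratch file so it is safe to run; substitute if=/dev/sda (as root) for the real scan:

```shell
# make a small scratch file, then read it back the same way you
# would read the disk; only read errors would be printed
dd if=/dev/zero of=/tmp/ddtest bs=1M count=4 2>/dev/null
dd if=/tmp/ddtest of=/dev/null bs=1M 2>/dev/null && echo "read ok"
```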

#bash function to: cd using your single word nickname/shortcuts

'cd_' is a simple bash function to create, manage, and use a directory of symbolic links that point to your favorite directories. I create a wrapper function with a shorter name to call 'cd_'. 'cd_' is part of iBASHrc.

  ex. using my directory shortcut 'zz'

    ~ $ c zz                                   # Where 'c' is alias for 'cd_'.
    /usr/local/7Rq/package/cur/sys-2012.03.25/shar/lib $ 


  cd_()
  {
    : team function _func_ok2unset_  manages directory shortcuts
    : -------------------------------------------------------------------- 
    : Synopsis: cd using favorite single word nicknames, or manage 
    : related symbolic links
    : -------------------------------------------------------------------- 
    : $FUNCNAME                             , "(no args) to list all shortcuts"
    : $FUNCNAME -a          SHORTCUTBASENAME, add sym link for \$PWD
    : $FUNCNAME -a REALPATH SHORTCUTBASENAME, add sym link for REALPATH
    : $FUNCNAME -d SHORTCUTBASENAME         , delete 
    : $FUNCNAME -h                          , show recently created favorites

    local dirs=~/dirs
    mkdir -p ~/dirs
    local hist=$dirs/hist

    local opt_true=1 OPTIND=1
    local OPT_l= OPT_d= OPT_a= OPT_h=
    while getopts lad:h opt_char
    do
       # save info in an "OPT_*" env var.
       test "$opt_char" != \? && eval OPT_${opt_char}="\"\${OPTARG:-$opt_true}\"" ||
         return 1 
    done
    shift $(( $OPTIND -1 ))
    unset opt_true opt_char

    if [[ -z $OPT_l && -z $OPT_d && -z $OPT_a && $# = 1 ]];then
      if [[ -L $dirs/$1 ]] ;then
        cd "$dirs/$1"
        return 0
      elif [[ -f $dirs/$1 ]];then
        # $1 is a script that echoes the dest dir.
        cd "$(source "$dirs/$1")"
      else
        echo "$FUNCNAME: [$1] not a shortcut" >&2
        return 1
      fi
    elif [[ -n $OPT_a ]];then
      if [[ $# == 2 ]];then
        (set -x;ln -Tsf "$1" "$dirs/$2")    2>&1 |tee -a $hist
        return ${PIPESTATUS[0]}
      elif [[ $# == 1 ]];then
        (set -x;ln -Tsf "$PWD" "$dirs/$1" ) 2>&1 |tee -a $hist
        return ${PIPESTATUS[0]}
      else
        echo "$FUNCNAME:oops:[$*]" >&2
        return 64
      fi
    elif [[ -n $OPT_d ]];then
      (set -x;rm -f "$dirs/$OPT_d")
      return 0
    elif [[ $OPT_l ]];then
      ls -ld $dirs/{*,.[^.]*}
      return 0
    elif [[ -n $OPT_h ]];then
      ( set -x;tail -4 $hist )
      return 0
    elif [[ $# = 0 ]];then
      ( set -x;cd "$dirs";ls -ld * ) 2>&1 |less
      return 0
    else
      echo $FUNCNAME:internal error >&2
      return 1
    fi
  }

Simulate cron env for linux

The environment for cron jobs is minimal.

This is close to the env that cron jobs see:

 $ env -i USER=$USER HOME=~ PATH=/usr/bin:/bin /bin/bash -c set
 BASH=/bin/bash
 BASH_ARGC=()
 BASH_ARGV=()
 BASH_EXECUTION_STRING=set
 BASH_LINENO=()
 BASH_SOURCE=()
 BASH_VERSINFO=([0]="3" [1]="2" [2]="25" [3]="1" [4]="release" [5]="i386-redhat-linux-gnu")
 BASH_VERSION='3.2.25(1)-release'
 DIRSTACK=()
 EUID=--snip
 GROUPS=()
 HOME=--snip
 HOSTNAME=--snip
 HOSTTYPE=i386
 IFS=$' \t\n'
 MACHTYPE=i386-redhat-linux-gnu
 OPTERR=1
 OPTIND=1
 OSTYPE=linux-gnu
 PATH=/usr/bin:/bin
 PPID=3237
 PS4='+ '
 PWD=/var/home/rodmant/tmp
 SHELL=/bin/bash
 SHELLOPTS=braceexpand:hashall:interactive-comments
 SHLVL=1
 TERM=dumb
 --snipped USER and UID
 _=/bin/bash

This one-liner is an example of running a script with args, to see if it will run in a sparse env like a cron job:

 $ env -i USER=$USER HOME=~ PATH=/usr/bin:/bin /bin/bash -c "$_C/argsshow a 'b c'"
 _01:a$
 _02:b c$

Swap your script and its args into the double quotes above.

de-dup PATH in ~/.bash_profile

A bash function I wrote for ~/.bash_profile to de-dup $PATH. It requires a bash associative array, so it works only in bash 4.x or later.

 _deDupPATH()
 {
   local path=$1

   if [[ ${BASH_VERSION%%.*} < 4 ]];then
     : Requires at least bash 4.x.
     echo "$path"
     return 0
   fi

   local oIFS="$IFS"
   local p nPATH
   declare -A seen
   local started=""

   IFS=:
   for p in $path;do
     if [[ -n $started ]];then
       if [[ -n ${seen["$p"]:-} ]];then
         continue
       else
         nPATH+=:"$p"
       fi
     else
       started=1
       nPATH="$p"
     fi
     seen["$p"]=1
   done

   IFS="$oIFS"
   unset seen
   echo "$nPATH"
 }

 # ex

   $ _deDupPATH a:a:z
   a:z

'ff' for piping to, saving to, and displaying tempfiles. A handy #bash function I use many times daily. #mktemp

New scratch files are created below ~/tmp/_ff/. A symbolic link ~/tmp/ff.txt is made pointing to the current scratchfile. Old scratch files are not deleted (let cron do that). I also have vim functions to call 'ff' for reading and writing.

 $ ff --help
 ff: Convenience cut and paste tool. Type, edit, pipe to an
 auto created, unique scratchfile.

 date|ff                        date > $scratchfile # ( new $scratchfile ), pathname of $scratchfile shown on STDERR
 seq 5|ff -t                    seq 5|tee $scratchfile # ( new $scratchfile )
 ff -c                          cat $scratchfile
 ff -w                          show pathname of current $scratchfile
 ff -C COMMENT                  prepend COMMENT to $scratchfile basename
 ff -l                          less $scratchfile
 ff -n                          edit new $scratchfile
 ff -nE                         new $scratchfile, echo pathname
 ff -P                          windows print           (cygwin only)
 ff ~/mystuff                   cp ~/mystuff $scratchfile # ( new $scratchfile )
 ff -e                          ed $scratchfile
 ff -5                          tail -5 $scratchfile
 ff +5                          head -5 $scratchfile
 ff -gc                         clipboard to new $scratchfile         (cygwin only)
 ff -pc                         copy $scratchfile to clipboard    (cygwin only)
 ff -R -- REMOPTS REMARGS       REMOPTS and ARGS are sent to a remote instance of ff
 ff -h HOST -- REMOPTS REMARGS

 ff -r                          use readline; read 1 line from STDIN, write new $scratchfile
 ff                             cat > $scratchfile # reads STDIN from terminal ( new $scratchfile )
 ff >foo                        cat $scratchfile > foo

This bash function is part of uqjau.

'chmod u-x foo/ && rm -rf foo/' # rm fails for Linux non root

'rm -rf foo' fails below, due to 'chmod a-x foo/':

 $ uname -ro; rpm -qf /bin/rm
 2.6.18-348.6.1.el5 GNU/Linux
 coreutils-5.97-34.el5_8.1
 $ id -u;mkdir foo;chmod a-x foo/;ls -logd foo
 4187
 drw-r--r-- 2 4096 Oct 17 07:46 foo/
 $ rm -rf foo; echo $?
 rm: cannot chdir from `.' to `foo': Permission denied
 1

Pretty sure this is intended behaviour. The last time I was able to check, Solaris did not have this "feature".

bash ssh session w/tty, w/o running login startup scripts

Assume you have a corrupt or faulty ~/.bash_profile which prevents you from logging in. This should position you to log in and edit it:

 ssh -t johndoe@foobar.com bash --norc -i
   # -t forces a tty; --norc, else bash sources ~/.bashrc; -i for interactive

Close STDIN. Fail the subshell if it reads STDIN.

 $ echo hi | ( set -e; <&- read foo ; echo notSeen >&2 )
 bash: read: read error: 0: Bad file descriptor

In a pipe, get the parent tty and prompt the user for input.

 $ :|(TTY=/dev/$(command \ps -o tty= -p $$);exec <$TTY;read -p '> ';echo got: $REPLY)
 > hi
 got: hi

sort and read null delimited STDIN

 $ printf "z\000j\000a"|sort -z |od -c
 0000000   a  \0   j  \0   z  \0
 --snip

 $ printf 'hi\000ho\000'|while read -r -d "" foo ;do echo $foo;done
 hi
 ho

fixing a munged TERMinal session

A bash function "_sa" (as in "sane"), using vim, that has been working for me:

 _sa ()
 {
     : --------------------------------------------------------------------
     : Synopsis: reset terminal, terminal reset, sanity reset.
     : Warning: has hardcoded: 'stty sane erase ^H', and depends on vim
     : --------------------------------------------------------------------
     [[ ${OSTYPE:-} = cygwin ]] || {
         reset;
         : in one case reset fixed line-drawing characters snafu
     }
     stty sane erase ^H
     vim +:q  # Has side effect of fixing up the terminal.
 }

bash "Command Substitution" [ as in $(command) ] strips NUL chars

 $ printf '\000hi\000' > foo
 $ wc -c foo
 4 foo
 $ echo -n "$(<foo)" | od -c
 0000000   h   i
 0000002
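When the NUL bytes matter, keep the data in a file or pipe rather than in a variable; for example, count the bytes without capturing them through $():

```shell
printf '\000hi\000' > /tmp/nulfile
# redirection avoids command substitution, so no NULs are stripped:
wc -c < /tmp/nulfile    # all 4 bytes counted, NULs included
```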

set -- $foo VS set - $foo

See bash 'help set'; 'set -- ARGS' is documented there, under the '--' entry near the end.

# compare:

 set -- $ans # vs
 set -  $ans # 1st is better when $ans is undefined

example:

 $ echo $BASH_VERSION
 4.1.10(4)-release
 $ set -- -foo
 $ echo $1
 -foo
 $ set -
 $ echo $1
 -foo
 $ set --
 $ echo $1/
 /

'set -e' (errexit) can be ineffective in bash functions

 $ (set -e; foo(){ false; echo hi; }; foo )  # Works ok if in simplest form.
 $ echo $?
 1
 # Three "not safe" examples:
 $ (set -e; foo(){ false; echo hi; }; if foo; then :;fi; ! foo; foo || : ; foo && : )
 hi
 hi
 hi
 hi

Simple statements calling function 'foo' are not a problem, but notice that some compound statements like:

 if foo ...

 ! foo

 foo || :

 foo && :

effectively disable 'set -e' (errexit flag) within function 'foo'.

Consider avoiding a dependency on 'set -e' in your functions.
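One defensive alternative (a sketch, just one possible style) is to propagate failures explicitly instead of relying on errexit:

```shell
foo() {
  false || return 1   # propagate the failure explicitly
  echo hi             # skipped when the previous line returns
}
# the caller decides what a failure means:
if foo; then echo ok; else echo "foo failed"; fi
# prints: foo failed
```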

Related links:

Despite how negative the above threads are, I think 'set -e' is still useful.

handy 'bash --login' function, for a fresh env

 $ type -a _login
 _login is a function
 _login ()
 {
     : --------------------------------------------------------------------;
     : Synopsis: Start new bash login shell using 'env -i ...' which minimizes;
     : environment vars picked up by new shell. 'SSH_' related vars for;
     : example will not be inherited. PATH also is fresh.;
     : --------------------------------------------------------------------;
     env -i USER=$USER HOME=$HOME TERM=$TERM $SHELL --login
 }

$* is immune from nounset (set -u)

 $ (: $* is immune from set -u; set -eu;set --; echo "$# [$*]")
 0 []

'set -u' does not apply to unexecuted code

 $ (set -eu;[[ -z $PATH || -n $bar ]]; echo hi )
 -bash: bar: unbound variable
 $ (set -eu;[[ -n $PATH || -n $bar ]]; echo hi )        # short circuit op works, no err for nounset :->
 --snip
 $ ( set -eu; if false;then : $bar;fi;echo hi )
 hi
 $ ( set -eu; if true;then : $bar;fi;echo hi )
 bash: bar: unbound variable
 $

Linux 'ps -p PID...' supports multiple pids

 $ command ps -wwH -o pid,ppid,sess,user,tty,state,bsdstart,args -p 1 4
   PID  PPID  SESS USER     TT       S  START COMMAND
     1     0     1 root     ?        S Feb 22 init [3]
     4     1     1 root     ?        S Feb 22   [watchdog/0]

uqjau SCRIPTS_OVERVIEW

synopsis of the best scripts

uqjau.tar.gz: >200 GPL'd bash scripts, perl scripts, bash functions...

home: http://www.nongnu.org/uqjau/README.html#README

README

uqjau: framework of mostly bash and some perl: scripts, script
functions, and script m4 macros for a range of simple purposes,
divided into "packages" (directories) by category.

It's a personal project; I'm the primary user.  uqjau is a
"grab bag" of tools created over many years that I install either
below ~ or /usr/local on UNIX and cygwin hosts.

I've been writing shell scripts since 1988 and using GNU/Linux since
1996; I still have much to learn.  Constructive review of this code is
appreciated.  I tried to follow good practices and believe there are
some interesting shell scripting idioms here.


There are almost 200 script or script function files; here are my
current favorites:

  jobmon:
    Wrapper to run another script and log its STDOUT/STDERR/exit
    code/duration.  Useful for cron jobs.  Log files are self-purged.

  fileutils-misc.shinc:
    bash file related functions for interactive or script use.
    sf(): find files in dirs given, sort by time, default to CWD
    _isonlydirs: return 0 if a tree is nothing but empty subdirs

  bash_common.shinc:
    script initialization, functions, and signal handlers
    to be sourced by bash scripts

  _bg:
    Run a simple command in the background in a separate
    process session.  It will not be seen by your shell as a job.
    Logs STDOUT and STDERR to a file.  Simple command => exactly
    1 command and its args.  Useful when 'at' and 'batch' are
    not available.

  backupall:
    tar backup of local filesystems; rewinds and reads each
    tape file to verify.  A log is created.  Remote tape drives
    are supported.

  key2agent:
    Loads the specified private ssh key into your ssh agent,
    supplying the passphrase from an encrypted file, and starting
    an ssh agent if needed.  Uses keychain, which gracefully
    handles the case where the key is already loaded.
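To illustrate the flavor of jobmon, here is a hypothetical minimal sketch (my own illustration, not the real script): run a command, capture its output to a log, and report exit code and duration.

```shell
# Hypothetical minimal jobmon-style wrapper: run a command, log its
# STDOUT/STDERR, and report exit code and duration.  The real jobmon
# also self-purges old log files.
jobmon_sketch() {
    local log rc start end
    log=$(mktemp) || return 1
    start=$(date +%s)
    rc=0
    "$@" >"$log" 2>&1 || rc=$?     # capture exit code without aborting
    end=$(date +%s)
    printf 'exit=%d duration=%ds log=%s\n' "$rc" "$((end - start))" "$log"
    return "$rc"
}

jobmon_sketch echo hello
```

The `|| rc=$?` idiom matters under 'set -e': it records a failure without letting it abort the wrapper.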

Design:

- Directory layout and tool configuration were influenced by
  'slashpackage': http://cr.yp.to/slashpackage.html. That scheme
  has a directory named 'commands', containing only symbolic
  links to scripts, which you may place in your PATH.

- No record of where the install tree is kept, on the file
  system or in env vars; scripts figure this out themselves, and
  do not depend on a fixed name for the top dir of the install
  tree.

- The goal was to write the scripts so that the 'commands' dir
  does not have to be in the PATH.  The exceptions are scripts
  that are not located in the uqjau tree but depend on files in
  it.  Scripts within the uqjau tree, as a rule, should not
  assume that the 'commands' dir is in the PATH.

- After a script determines the path to the root of the uqjau
  install dir, it may source supporting (dependency) files in
  the uqjau tree.

- Scripts should not depend on pre-existing exported env vars, so cron
  jobs that run scripts should work w/o wrapper scripts.

- Scripts may only be invoked by: their basename (assuming
  the 'commands' dir is in the PATH); by the fully qualified
  pathname to the 'commands' or 'scommands' dir; or by
  ./COMMANDBASENAME, but only if the current working dir is the
  'commands' dir.  The limitation in the last case is due to
  the logic most scripts use to determine the install dir for
  uqjau.

- As a rule, bash scripts use 'set -eu', so any failed command
  or undefined env variable aborts the script.  Exit codes
  from these scripts and script functions are generally
  meaningful; i.e. 0 => success.

    TBD: 'set -e' usage in my code needs to be reviewed in light of
 
      http://lists.gnu.org/archive/html/bug-bash/2012-12/msg00094.html
      http://lists.gnu.org/archive/html/bug-bash/2012-12/msg00102.html

- As a rule 1st line of scripts is:

    #!/usr/bin/env bash

  So the interpreter (in this case bash) may be installed
  anywhere, as long as it is in the PATH.  This also makes
  testing new versions of bash (for example) easier.
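The "figure out the install dir" idea in the design notes above might be sketched like this (my guess at the logic, assuming a TOP/commands/NAME -> TOP/PKG/NAME symlink layout; the real uqjau scripts may differ):

```shell
# Resolve the install root from the path a script was invoked by:
# follow symlinks, then strip two path components (hypothetical
# TOP/PKG/NAME layout).
uqjau_top() {
    local self
    self=$(readlink -f "$1") || return 1   # GNU readlink canonicalizes
    printf '%s\n' "${self%/*/*}"
}

# example (hypothetical layout):
#   uqjau_top /usr/local/uqjau/commands/jobmon  ->  /usr/local/uqjau
```

A script would typically call something like `uqjau_top "$0"` early on, then source dependency files relative to the result.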

interactive bash function library; scheme to manage bash login sequence

download: http://trodman.com/pub/iBASHrc.tar.gz

README

A scheme for managing ~/.{bashrc,bash_profile} and other 'rc'
files.  A suite of over 160 day-to-day sysadmin/general bash
functions, 100+ aliases, and several ~/.applicationrc files,
for interactive use in Linux and Cygwin.  Supports an approach
for managing functions, aliases, and env vars on multiple hosts
(selectively sharing code).

Typically, I update the tar archive (content) at least once per week.

The login sequence is broken up into *many* separate files
that are sourced.  Host-specific modifications are
placed in a subdirectory named './noshar', so everything
else can be shared across hosts.  Run the '_lsq' (login
sequence) bash function to get an idea of the flow.
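The "many sourced files" idea can be sketched like this (the directory name and extension here are illustrative assumptions, not iBASHrc's actual names):

```shell
# Sketch: source every fragment in a shared rc dir, then call again on a
# 'noshar'-style subdir so host-specific overrides win (names hypothetical).
source_rc_dir() {
    local d=$1 f
    [ -d "$d" ] || return 0
    for f in "$d"/*.shinc; do
        [ -r "$f" ] && . "$f"
    done
    return 0
}

# typical call order: shared first, host-specific last
# source_rc_dir "$HOME/.bashrc.d"
# source_rc_dir "$HOME/.bashrc.d/noshar"
```

Sourcing the host-specific directory last lets it redefine anything the shared fragments set.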

It's ugly/messy/a bit fragile, but I use it every day, on several hosts.
For now I suggest you just look it over for ideas.  I have no design
docs, but it is reasonably commented.  It should be safe to install on
your primary (non-root) account, but it's a major set of changes, so I
suggest you create a new account to test it.

These startup routines have some dependencies on my GPL'd bash shell scripts:

    http://www.nongnu.org/uqjau/README.html#README

Hope you get some idioms/ideas from the code.

BUGS:
  Some bash functions (and aliases?) are included that will not
  work without the uqjau tools installed, and some of the
  tools are very provincial.  I will try to move them out as time
  permits.  'set -e' is enabled for most of the login sequence, so
  any failing command will abort your login; it's easy to change this,
  but be warned!

UNIX cp, touch, mkdir with windows permissions

If you like UNIX cp, 'cp -r', 'mkdir -p', and touch, but you have to use Windows and you want destination files and dirs with normal Windows permissions...

Take a look at these bash cygwin wrapper scripts:

I use them frequently, so they're reasonably mature. They're part of http://trodman.com/blog/#uqjau

_cp is available in $_lib/_cp.shinc; it will also be automatically loaded as a shell function if you install http://trodman.com/pub/iBASHrc.tar.gz

The same approach applies to $_lib/_wtouch.shinc and $_lib/_wmkdir.shinc.

_cpd is a script which will be in your PATH.

--

CLI tool loads list of your favorite cygwin packages

download: trodman.com/pub/cygwin/cygwin-../%5Fcygii0/%5Fcygwin-pkgs-setup.cmd

This simple tool may help you get started with cygwin. You can use it for your initial install of cygwin.

It should install the list of cygwin packages I use; the set is relatively minimal (~500MB), for old school UNIX sysadmin types, with no X Windows.

There are help notes and comments in the batch file; please edit as needed.

--

maintaining cygwin install on network drive

I suggest you load it on a local drive on your PC; but on that same PC, you may install a 2nd instance on a network drive, so others who do not want to do the install locally may use it. This is also useful when you have multiple hosts that need cygwin and you do not want to maintain many separate cygwin installs.

setup.exe supports installation of more than one instance of cygwin; but you can only run one instance at a time on a given host. You can maintain both the local cygwin install, and the network install on the same PC.

--

My Job Cover Letters

--

"Follow your Passion"

Instead, mimic habits, and self train to mirror successful people you admire (google talk by author Cal Newport "So Good They Can't Ignore You" ): <http://www.youtube.com/watch?v=qwOdU02SE0w>

--

Job Search Help

--

for my personal search:

--

Posting Your Resume

WI
USA

--

Questions to ask Recruiters

--

Work w/Marty Nemko Podcast

<http://www.martynemko.com/radio-show>

--

Mon 21 Sep 2009

DDS Tape Drives - Observations and Best Practices

At home, after writing backup data to DDS tape, I rewind, and read each tape file to validate the tape.

If that tape drive fails, I want to be able to recover using a different drive.

I have several drives; can they read each other's tapes? SCSI DDS tape drives have always been dicey, delicate devices for me. Yesterday, I ran some preliminary tests. My DDS3 drive can read DDS2 tapes created by other drives. As a rule, it appears, my DDS2 drives can read each other's tapes.

Unfortunately my DDS2 drives cannot read any DDS2 tape that was written by my DDS3 tape drive. I have checked the density of the writes with 'mt -f /dev/st0 stat', and DDS2 is reported. Needless to say, I won't be creating any more DDS2 tape backups with my DDS3 drive. After each backup is done, I update the paper label on the tape with the date and the name of the specific tape drive that wrote it.

--
Ever have a DDS tape refuse to eject? Take the steps needed so you can see the outside of the drive itself from the side. On one side, there should be a hole in the metal drive cover for a small screwdriver. Around this hole should be a CW or CCW arrow, suggesting which way to rotate the internal screw head. It is geared down, so it takes many turns to eject the tape; as you turn, be careful to untangle the tape media if the drive has 'eaten the tape'.

Wed 26 Aug 2009

--

nice and backgrounding a ~/.bashrc function

Say you have a bash function defined as part of your login sequence, so it is available to your interactive shell. You want to run this function for a job you know will take hours, so you'd like to background it and run it at a lower priority.

In the last example below I want to report diffs on my "day plan" (a text file 'dp') over a 6 year period; 'dp' was cron/auto checked in to RCS revision control each work day.

The first two methods below fail; the third points to the solution:

  ~ $ bash -c '(set -x; alias pgrep; type _history)'
  + alias pgrep
  bash: line 1: alias: pgrep: not found
  + type _history
  bash: line 1: type: _history: not found
  ~ $ bash -lc '(set -x; alias pgrep; type _history)'
  + alias pgrep
  bash: line 1: alias: pgrep: not found
  + type _history
  bash: line 1: type: _history: not found
  ~ $ bash -lic '(set -x; alias pgrep; type _history)'
  + alias pgrep
  alias pgrep='pgrep -l -f'
  + type _history
  _history is a function
  _history ()
  {
      case $BASH_VERSION in
          [12]*)
              history "$@"
          ;;
          *)
              HISTTIMEFORMAT="%x %X " history "$@"
          ;;
      esac
  }
  ~ $

So in my real world case, here is the solution:

  nice bash -lic 'hrcs -t "6 years ago" /tmp/dp' &>/var/tmp/foo &
    # 'hrcs' (show RCS history) is a bash function I define for my
    # login sessions; the above job runs in the background at lower
    # priority.  See 'man bash': ( -l ==> login; -i ==> interactive)

--
A hint at a simpler approach, which fails:

  ~ $ foo() { echo hi; }
  ~ $ foo
  hi
  ~ $ nice foo
  nice: foo: No such file or directory
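An alternative workaround, not shown above but worth knowing: bash can pass functions to child shells via 'export -f', so a full login+interactive shell is not required when the function has no other rc-file dependencies.

```shell
# 'nice foo' fails because nice exec's a program, and 'foo' is only a
# shell function.  bash's 'export -f' puts the function definition in the
# environment, so a child bash started under nice can run it.
foo() { echo hi; }
export -f foo
nice bash -c 'foo'
```

This only helps for functions that are self-contained; a function like 'hrcs' that depends on aliases or other functions from the login sequence still needs 'bash -lic'.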