bug-bash

some self written docs too, asking for input on em


From: Alex fxmbsw7 Ratchev
Subject: some self written docs too, asking for input on em
Date: Wed, 24 Feb 2021 01:19:13 +0100

i wrote, partly a while ago, some docs about bash, awk, and debian-related
linux .. i wish some of the bugs i hit would get fixed instead of staying
bugs, so i can continue writing docs; for long years i had a big code
archive page of just me, not many others, and i am totally for public
domain open source, everything that is not this is not legal goodness :))

so, i wrote docs, please give me some feedback on them :))
they are in separate doc.<topic>[.subtopic] format, and in this mail
included in plain text and also in the total archive
the mail also includes a flat text of all of it, with some detailed file
information, so it can be used as a data stream, oh well

please no offense, i just try to work things out

update: google doesn't allow my docs.tgz, so the content of the mail has
only the flat 2d text and each file of it, with the exception of one file
that google's antivirus found suspicious, but it is just the bash cmd $( < file )
updated: removed that file and tried sending the archive again; seemed to
work, so.. hope
--
++ paste BEGIN ++

++ paste BEGINFILE doc.order

doc.order
doc.author doc.intro doc.license
doc.paste.splitback doc.pasteix.script doc.paste.cmd
doc.common-args doc.-- doc.unix
doc.gnu doc.posix doc.ansi
doc.debian doc.gnu-linux
doc.deb.apt
doc.linux doc.linux.users doc.linux.permissions
doc.regex
doc.bash
 doc.bash.lang doc.bash.quoting doc.bash.cmds doc.bash.redirections
 doc.bash.exec
 doc.bash.var-setment doc.bash.env-vars doc.bash.declare doc.bash.arrays doc.bash.assoc-arrs
 doc.bash.var-expansion doc.bash.brace-expansion
 doc.bash.escaping doc.bash.aliases
 doc.bash.printf doc.bash.glob
 doc.bash.read doc.bash.mapfile
 doc.bash.loops
doc.awk doc.gawk
 doc.awk.lang doc.awk.vars
 doc.awk.records doc.awk.fields doc.awk.types doc.gawk.switch doc.awk.if doc.awk.while doc.awk.print doc.awk.printf doc.awk.sub doc.awk.gsub doc.gawk.gensub
doc.foobar
doc.man doc.grep doc.printf doc.ls doc.mv doc.cp doc.ps doc.find


++ paste ENDFILE doc.order ( 23 lined 64 worded 853 chared )

++ paste BEGINFILE doc.author

i, alex[ander] [dimitrov] [popov kirov] ratchev, am 35, m, living in a
rotten part of switzerland
i live on the inet under nicknames such as Fucked MicrosuxX Bot, {xmb},
ixz, x7y, ikhxcsz7y
doing public domain open source only
peace


++ paste ENDFILE doc.author ( 4 lined 40 worded 229 chared )

++ paste BEGINFILE doc.intro

this should represent a [in preliminary stage] howto from zero to 100 in
debian, bash and g/awk coding
it contains
 $ commands_to_run_on_the_shell
inline comments and further learning text
 <direct code or requirement here>
 [optional piece of data supplied by the user or the doc writer]
 ... further stuff

peace / x7y


++ paste ENDFILE doc.intro ( 9 lined 50 worded 308 chared )

++ paste BEGINFILE doc.license

me and all of this fall under:
 public domain open source
more specifically
 free, only-good, non-hostile public domain <purpose> texts

causing harm will not go without consequences


++ paste ENDFILE doc.license ( 6 lined 27 worded 163 chared )

++ paste BEGINFILE doc.paste.splitback

#!/usr/bin/gawk -f

# split a pasteix-style stream back into the files it was built from

BEGIN {
 if ( pre == "" ) pre = "/tmp/"   # target dir prefix, override with -v pre=...

 b = "^   \\+\\+ BEGINFILE "
 e = "^   -- ENDFILE "

 OFS = ORS = ""
}

# range pattern: from a BEGINFILE marker to the matching ENDFILE marker
$0 ~ b , $0 ~ e {
 if ( $0 ~ b ) {
  # take the filename after "BEGINFILE ", strip any ../ path escapes
  file = pre substr( $0 , index( $0 , "FILE" ) + 5 )
  gsub( /\.\.\//, "", file )
  print file "\n"
  getline   # skip the blank line after the marker
 } else if ( $0 ~ e ) {
  # write the collected body without its trailing newline
  print substr( sav, 1, length( sav ) - 1 ) >file
  close( file )
  ++filed
  sav = ""
 } else
  sav = sav $0 RT
}

END { print "did split " filed " files back\n" }


++ paste ENDFILE doc.paste.splitback ( 27 lined 115 worded 440 chared )

++ paste BEGINFILE doc.pasteix.script

#!/bin/bash

# paste the given files as one marked-up stream, either to ix.io
# ( when $d starts with "ix" ) or appended to a temp file whose name is printed

case "$d" in
ix*) curl -F 'f:1=<-' ix.io ;;
*) cat >>"${o:=/tmp/${RANDOM:0:3}${RANDOM:0:3}}"
 printf '%s\n' "$o" ;;
esac < <(
 gawk -v ORS= -e '
BEGINFILE {
 if ( ARGC > 1 )
  print "   ++ BEGINFILE " FILENAME "\n\n"
}
ENDFILE {
 if ( ARGC > 1 ) {
  print "\n   -- ENDFILE " FILENAME "   -=| " c " chars " w " words " FNR " lines\n"

  # add to the totals, reset the per-file counters
  tc += c
  tw += w
  tl += FNR
  t++
  w = c = 0
 }
}
END {
 if ( ARGC > 1 )
  print "   -- END of " t " pastes ( " tc " chars " tw " words " tl " lines\n"
}
{
 print $0 RT
 fflush()

 w += NF
 c += length( $0 RT )
}
  ' "$@"
 )


++ paste ENDFILE doc.pasteix.script ( 36 lined 136 worded 543 chared )

++ paste BEGINFILE doc.paste.cmd

bash doc.pasteix.script $( < doc.order )


++ paste ENDFILE doc.paste.cmd ( 1 lined 6 worded 40 chared )

++ paste BEGINFILE doc.common-args

there is one [imho only halfway good, just as other standards] argument
style that caught on for command typing on unix systems, besides no parsing
at all, which is not this
this is the
 -s -w -itch --es
before
 data\main args
separated when needed [or so] by --

so it is:
 ./app -m 1 afile
may mean mode 1 over afile for app
or
 ./app --mode 1 afile

if afile starts with a dash as its first char, it would be treated as a
switch\option to parse and act on; to avoid this
 ./app -m 1 -- -afile
or
 ./app -m 1 ./-afile


++ paste ENDFILE doc.common-args ( 17 lined 93 worded 470 chared )

++ paste BEGINFILE doc.--

'--' is a marker in argument parsing that separates -switch --options from
data\main args


++ paste ENDFILE doc.-- ( 1 lined 14 worded 87 chared )

++ paste BEGINFILE doc.unix

i think unix is the umbrella term for a certain style of command-driven
system.. linux is unix-like, minix is, darwin is, ..

on the unix command layer, success is exit code 0; the status can be at
most 255 afaik (not sure), there are no negative ones, so 1 is already an
error, 2 too, etc; only 0 is success ( failure is not .. )
eg
 grep non_existing_match
will exit non-0 when it did not find any results


++ paste ENDFILE doc.unix ( 6 lined 75 worded 391 chared )
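a runnable sketch of the exit-status rule from doc.unix above, using grep -q (quiet mode, assumed available) just so no match output gets in the way:

```shell
# success is exit status 0; any non-zero status signals failure
printf 'hello\n' | grep -q hello
echo "found: $?"      # found: 0

printf 'hello\n' | grep -q absent
echo "not found: $?"  # not found: 1
```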

++ paste BEGINFILE doc.gnu

gnu is not something i know much about; some overall public domain open
source standards or something


++ paste ENDFILE doc.gnu ( 1 lined 16 worded 91 chared )

++ paste BEGINFILE doc.posix

posix is also not something i know much about; old, outdated, incomplete
coding rules and such

additional information:
 https://pubs.opengroup.org/onlinepubs/9699919799/
 https://shellhaters.org/


++ paste ENDFILE doc.posix ( 5 lined 19 worded 180 chared )

++ paste BEGINFILE doc.ansi

ansi is not something i know much about; a standard in text output or
something


++ paste ENDFILE doc.ansi ( 1 lined 11 worded 61 chared )

++ paste BEGINFILE doc.debian

debian.org is a public domain open source distribution of the linux kernel
and the gnu toolchain

to install, download the iso corresponding to your cpu arch, write it to
usb or the like, and boot off it
don't forget to have space for at least one ext4-formatted partition for
the system and user data, that is, the linux kernel plus many gnu and
non-gnu apps, respectively 'bins' for binary executables

debian has its roots somewhere in the very beginning of linux distros, and
has many sub-distributions based on its packaging system and/or package
building system and/or the binaries then .deb'd

its installed package management base is named 'dpkg', i guess debian
package ? =p
on top of that there is a front end named apt and/or apt-get and similar
and a lightweight informational thingy named apt-cache

 # apt-get update # download the newest package data to local
 # apt-cache search gnu bash # to find matching results
 # apt-get install bash gawk socat # -t experimental # for having the base
 ..
 # dpkg -i a_package.deb # to install a locally available .deb file
 # apt-get remove a_package # to remove that package via apt # --purge # to
also remove residual config files
 # dpkg -r a_package # to remove it more directly via dpkg
 # dpkg --purge left_overs # to remove residual leftovers of deinstalled
packages

there are config files i don't know much about due to their unknown syntax;
i can't really learn that, i'd rather code up something better
one of many is /etc/apt/apt.conf for main configs, which i don't know
another is /etc/apt/sources.list, which defines the sources apt gets its
data from
my sources.list dates nearly back to my beginning with debian potato
2.something and contains also beta and such sources, officially by debian
though

about experimental: it can usually be used with apt-get -t experimental; if
you dist-upgrade, be sure to first verify the package removals, as perl
updates and such may break xorg and similar; in those cases you don't
dist-upgrade, you upgrade [-t experimental], then manually add the
remaining packages that don't fatally affect your system, by looking at
apt-get's 'packages have been held back' output and listing the wanted
packages after the upgrade or install command, each word being a package
name

 deb http://deb.debian.org/debian experimental main contrib non-free
 deb http://deb.debian.org/debian unstable main contrib non-free
 deb http://deb.debian.org/debian testing main contrib non-free
 deb http://deb.debian.org/debian stable main contrib non-free
 # current release specific optional entry
 deb http://deb.debian.org/debian bullseye main contrib non-free
 deb http://security.debian.org/debian-security bullseye-security main contrib non-free
 deb-src http://security.debian.org/debian-security bullseye-security main contrib non-free


++ paste ENDFILE doc.debian ( 35 lined 427 worded 2744 chared )

++ paste BEGINFILE doc.gnu-linux

gnu/linux usually denotes the linux kernel together with the gnu extended
set of binary apps


++ paste ENDFILE doc.gnu-linux ( 1 lined 15 worded 92 chared )

++ paste BEGINFILE doc.deb.apt

 apt / apt-get / apt-cache

are the baseline utils in debian to manage masses of packages on the fly
 apt-cache search content
 apt-get install packages
 apt-get remove packages
 apt-get update
 apt-get upgrade \ dist-upgrade

 -t experimental


++ paste ENDFILE doc.deb.apt ( 10 lined 36 worded 234 chared )

++ paste BEGINFILE doc.linux

linux means at least two things: first, it is the name of the kernel
running on a computer, and second, the apps supported by it, eg
gnu-linux-debian apps

the linux kernel has several configuration options
i know only a few, since it is not my area of doing, .c

first of all, kernel.org is the public address to get the newest kernel
as for git beta kernels, i haven't had luck yet, ..

the kernel archive contains a big Documentation dir which also partly
covers boot parameters, plus other docs such as kernel configuration
information and the like
the kernel has many possible build options, of which most people enable
only a few, like outdated choices and
'security-enforcing-dont-update-to-newest' nonsense

the kernel also exposes runtime information via variables
 # sysctl -a
to view them all
 # sysctl -w var=val
to set one

to build a kernel, you must have gcc or the like and make or the like
installed, as well as maybe ncurses or readline or such, maybe not

in the kernel dir, run
 make menuconfig
to spend lots of hours configuring stuff that will make the boot not work
( but hey, it might )

i only know the debian way of building it, since i always needed just the
.deb to run

linux kernel vars can also be passed as boot params, either typed manually
on your boot cmdline, or usually pre-written from a config file
debian's default boot loader is grub, which is acceptable
i just edit /etc/default/grub and add or remove stuff there
then run
 # update-grub
which generates valid, updated /boot/grub/grub.cfg entries


++ paste ENDFILE doc.linux ( 31 lined 269 worded 1545 chared )

++ paste BEGINFILE doc.linux.users

linux is also multi-user capable ( as long as that is not disabled in
config or so )
the main user, user id 0, user name 'root', is the admin account everyone
needs

when i began with linux i was taught, as by every other guide, to always
use a non-root user for everything, like a 'personal user'; however i think
this is big bullcrap, so.. go for root

debian's first user, created at usb install or so, has uid 1000 and is
named by your choice
users also belong to groups, which.. group users together

the
 chown
cmd is used for changing ownerships, ..
 chown root file
 chown root:root file


++ paste ENDFILE doc.linux.users ( 12 lined 107 worded 566 chared )

++ paste BEGINFILE doc.linux.permissions

linux has lots of permission stuff i don't know about, like in-memory hooks
for before / after a function
however, file permissions i know a bit of

each permission digit goes from 0 to 7
 0 no permission
 1 exec
 2 write
 3 write+exec
 4 read
 5 read+exec
 6 read+write
 7 read+write+exec

that for 3 positions, in order: 1) owning user 2) owning group 3) everyone
else

with chmod you change modes on files
 chmod 446 dir
changes dir's mode to 446
 chmod 557 file
changes file's mode to 557

there is a fourth, leading digit, as in 1755 or 4755; it holds the setuid
( 4 ), setgid ( 2 ) and sticky ( 1 ) bits

to be able to enter a directory, its exec bit ( the 1 in 1/3/5/7 ) must be
set


++ paste ENDFILE doc.linux.permissions ( 24 lined 109 worded 554 chared )
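a small runnable sketch of the mode digits from doc.linux.permissions above; stat -c is the GNU coreutils form, assumed present on debian:

```shell
# make a scratch file and give it owner=rw (6), group=r (4), others=none (0)
f=$(mktemp)
chmod 640 "$f"
stat -c %a "$f"   # prints: 640
rm -f "$f"
```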

++ paste BEGINFILE doc.regex

regex, aka regular expressions, is a specification language for matching
blocks of data by stacking up character matches

every char in the sequence must match; there are also char sets with
ranges, and wildcards

 abc
literally 'abc'
 .
any character
 *
zero or more of the last match block
 +
one or more
 [abc]
a or b or c
 [^abc]
none of a b c
in globs this is sometimes written as
 [!abc]

 [a-z]
a to z ( a range by numeric char value, low end to high end )
 [a-zABC]
a to z, or A or B or C
 (group match)
groups, for data grouping ( no duplicated code ) or for in-memory
reuse-the-content-later groups
 {n} or {n,} or {,m} or {n,m}
a specified count of instances of the last match block

 a*b+c.[eE]
zero or more a
one or more b
one c
any char
e or E

 abc ~ ...
  -> abc
 abc ~ cba
  -> <nothing>


++ paste ENDFILE doc.regex ( 40 lined 146 worded 729 chared )
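a quick try of the a*b+ pattern from doc.regex above, using grep -E (extended regex, assumed available) as the matcher:

```shell
# a*b+ : zero or more a, then at least one b; 'ac' has no b so it is dropped
printf '%s\n' aab abb ac | grep -E 'a*b+'
# prints:
# aab
# abb
```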

++ paste BEGINFILE doc.bash

gnu bash is one of many commands-to-run interpreters
microsuxx's winblows cmd.exe, for example, is the same style of interpreter
with a completely different input format
likewise other cmd shells, like zsh

it reads commands and keyword constructs and acts on them, either from the
keyboard / stdin or from dedicated scripts


++ paste ENDFILE doc.bash ( 5 lined 47 worded 302 chared )

++ paste BEGINFILE doc.bash.lang

the bash coding language is quite close to posix shell syntax: spaces
mainly as word separator, ';' or newlines between commands, single or
double quotes to bind stuff across that separator ( IFS, the internal field
separator ), or single chars escaped ( 'next thing is not special' ) with a
backslash ( \ )

 this_is_a_cmds_name this_is_arg1 'this is arg2' these were args 3 to 8
 "cmd with spaces"
 cmd\ with\ spaces

 var=val cmd
 var=val\ n\ other cmd
 var='val n other' cmd
 var="var n other" cmd

single quotes preserve data without special interpretation
the only drawback is, you can't have a single quote inside the data; for
this, you end the ', write \' for the literal single quote, then open '
again for the further data; of course if it is the end of the data element
you don't open new ones, as it is the end
double quotes interpret a few things: \ escaped " double quotes, shell cmds
with ` cmd ` or the more proper style of this, $( cmd ), which returns the
cmd's stdout onto the cmdline instead of to the terminal, and probably
other things

 == var'n other
 var=var\'n\ other
 var="var'n other"
 var='var'\''n other'
 var=var\'n' other'

quotes can appear anywhere where bash parses cmds and are meant to bind
stuff into one data argument piece

to make bash interpret char escape sequences, such as newline or ansi
terminal codes, put them between $' and '

 var=$'this is \e[1mbold\e[m partly\n'

to include a single quote in this form, you just escape it
 var=$'this\'includes\'single\'quotes\'without\'spaces'


++ paste ENDFILE doc.bash.lang ( 29 lined 257 worded 1497 chared )
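a tiny demo of the quoting forms from doc.bash.lang above: three spellings of the exact same single argument:

```shell
# printf shows each argument between < and >, so splitting is visible
printf '<%s>\n' "two words"
printf '<%s>\n' 'two words'
printf '<%s>\n' two\ words
# each line prints: <two words>
```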

++ paste BEGINFILE doc.bash.quoting

quotes tell the shell to treat normally separated data ( eg by spaces ) as
one argument instead of splitting it

there are a few types of quotes

 from " to "  double quotes, for lightweight concatenation; inside you can
start commands in subshells with ` or $( to ), which inside themselves
reset the " quoting state to none
 from ' to '  single quotes, for more static text without interpretation;
to use ' inside ', you simply, sadly, end the ', put a literal \', and
begin the single quotes ' again .. for the further text ..
 only \<next_char>  escapes one single char
 from $' to '  is for formatted text, with \n and such
interpretation\translation
 from $" to "  is for locale-specific translation of the string


++ paste ENDFILE doc.bash.quoting ( 9 lined 121 worded 640 chared )

++ paste BEGINFILE doc.bash.cmds

bash cmds consist of several parts
 [vars=values] cmd [args] [;\n]
[redirections] are possible anywhere on the cmdline; they don't get passed
to the app as text but get redirected, eg 2>&1 for stderr to stdout, or
> >( tee -a filter.out < <( awk '/filter/' ) ) for fancier cases

where [..] is optional, cmd is not
 at least in loops and some other constructs you have to have one command
( like the empty ':' command, if you didn't overwrite it already )

commands are read and executed sequentially, as fast as input and
processing allow
commands may be separated by ';' or newline ( or end-of-file )
commands that begin with optional whitespace and then a '#' mark the rest
of the line as a non-executed comment

if you specify var=val on the cmd-containing line before the cmd, it only
counts for that cmd's runtime; it does not set the var beyond that

the args to commands are optional; whether they are needed depends on the
purpose and on what the command's code expects
args are mostly used as switches between app functions, or as data parts
like filenames, or even the data itself

-- redirections -- to stdin/stdout/stderr respectively other fd numbers or
files are possible via the > and < signs

 > filename
overwrites filename with new content
 >> filename
appends to filename rather than overwriting it

 > >( cmd )
for output into cmd's input
 < <( cmd )
for the current cmd reading cmd's output

-- command combining --
is called pipe'ing commands into other commands, via the '|' vertical line
sign

 cmd1 | cmd2
pipes cmd1's stdout output into cmd2's stdin
 cmd3 | cmd4 | cmd5
...


++ paste ENDFILE doc.bash.cmds ( 35 lined 276 worded 1534 chared )
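a minimal pipe example for doc.bash.cmds above, assuming seq and grep are around:

```shell
# seq prints 1..10, one per line; grep -c counts the lines containing "1"
seq 1 10 | grep -c 1   # prints: 2   ( the lines "1" and "10" )
```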

++ paste BEGINFILE doc.bash.redirections

bash redirections to files and fds are specified as a command extension via
the < or > signs, without or with other features

as a command extension, a command must be run for the redirection to be
active
that has two sides, at least
one is
 $ cmd >out 2>out.2
redirects cmd's stdout to out and stderr to out.2, and that rule only
remains valid for the one cmd line specified there
 $ exec >out 2>out.2
redirects both globally for the running bash session

 $ exec <in >out
 $ app &
redirects the current script's | interactive session's stdin from in and
stdout to out, then runs app in the background
meaning app's stdin comes from file ./in and its output goes to ./out

there are also command spawning abilities
 >( cmd )
for output-to command
expands to a /dev/fd/<number> path
to use it, usually prepend another redirection sign
eg
 > >( cmd )
tells, in other words, 'output to the /dev/fd connection of cmd'

 <( cmd )
 < <( cmd )
same with input

old style:
 ps aux | awk '/app/ { print $2 }'

new style:
 pgrep app
 awk '/app/ { print $2 }' < <( ps aux )


++ paste ENDFILE doc.bash.redirections ( 37 lined 191 worded 1000 chared )
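the 'new style' from doc.bash.redirections above in runnable form, reading a command's output through process substitution (bash only):

```shell
# < <( cmd ) feeds cmd's output to the loop as if it were a file
while read -r n; do
 echo "got $n"
done < <( seq 1 3 )
# got 1
# got 2
# got 3
```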

++ paste BEGINFILE doc.bash.exec

exec is the shell builtin for a few small but important things
with it you can make redirections global, and it is used to 'replace the
current shell process with a command'
which means, when that command exits, the whole script/shell is gone .. i
am not sure how that interacts with spawning; i have observed several speed
ups placing exec in & background spawned cmds

 $ exec -a a_name my_cmd
spawns my_cmd under the app name a_name, as it would appear in ps/pstree
 -c for an empty environment
 and -l for login-like: prepend a dash ( '-' ) to the name

 exec >outf 2>&1
or the like makes redirections global; in this case stdout goes to the file
'outf' and then stderr is duplicated onto stdout ( mind the order: 2>&1
copies whatever stdout points at in that moment, so put it after >outf )
commands run afterwards will have this in effect


++ paste ENDFILE doc.bash.exec ( 12 lined 134 worded 718 chared )

++ paste BEGINFILE doc.bash.var-setment

 varname=content in general, separated by spaces or the command separator ;
i advise stacking up multiple assignments without command separators for
speed; the code also looks prettier and is easier to reason about

 var=cont var2=$var$var var3="$var var"

see var-expansion for the many possibilities of using and modifying strings
inline


++ paste ENDFILE doc.bash.var-setment ( 6 lined 50 worded 329 chared )

++ paste BEGINFILE doc.bash.env-vars

environment variables are those vars that get inherited rather than not
they are inherited whenever something is run by something, and that, if not
modified in between, passes them on further
by default, variables are not exported to the environment
there are a couple of cmds that make them exported

 declare [-g] -x var[=content]
 export var[=content]

to pass var content to child processes, like as config values for stuff
you spawn
an example is LIBGL_INDIRECT=1 or something, PATH also

exported variables are a good way to share data onwards, an efficient one
too, ..,

env vars cannot be arrays, only normal one-string values


++ paste ENDFILE doc.bash.env-vars ( 14 lined 101 worded 609 chared )
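a quick demo of the export rule from doc.bash.env-vars above; demo_plain and demo_shared are made-up names just for this sketch:

```shell
demo_plain=1          # not exported: stays in this shell only
export demo_shared=2  # exported: child processes can see it
bash -c 'echo "${demo_plain:-unset} ${demo_shared:-unset}"'   # prints: unset 2
```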

++ paste BEGINFILE doc.bash.declare

declare is the bash builtin used to define the more special kinds of vars,
such as name references and associative arrays

 declare
alone prints all set vars, separated by newlines, single-quote-escaped
whenever needed
 declare var=value
declares var=value as local when used inside a function
because without -g for global it will do so
local means the var is scoped to the function currently running ( or
global, if in main code ); not visible to the parent, but inherited by
child scopes of the code

 declare -gx var=val
declares var=val as global and exported, for child processes

 -a means array
 -A means associative array
 -n means name reference, meaning the content set will be interpreted on
later $var usage as the name of another var to get and set data from

 declare -n foo=bar ; bar=yes ; : $foo is yes ; foo=yes yes=no bar=maybe ; : $foo and $bar is maybe


++ paste ENDFILE doc.bash.declare ( 17 lined 145 worded 805 chared )
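a simpler nameref walk-through than the one-liner in doc.bash.declare above; ref and target are made-up names:

```shell
declare -n ref=target   # ref now refers to the var named target
target=yes
echo "$ref"     # yes
ref=maybe       # assigns through the reference
echo "$target"  # maybe
```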

++ paste BEGINFILE doc.bash.arrays

arrays are basically varnames holding a list of strings
this means they are not one-string vars like var=foo
they can have var[0]=foo var[1]=next var[2]=other

associative arrays, the ones where an element name can be non-numeric too
( like for instant string lookups ), must at least initially be declared
via
 declare -A varname
 declare -gA varname=( [one]=foo [two]=bar )
 declare -A var ; var[foo]=bar ; printf %s\\n "${var[foo]}"
see 'assoc-arrs' for more on those assoc arrays

indexed arrays, the ones with numeric index/element[s], you usually don't
have to declare specially; -a is there for this if you want

see 'declare' for some specials

 arr=( abc def ) arr+=( "abc" )
  -> arr = ( abc def abc )


++ paste ENDFILE doc.bash.arrays ( 16 lined 111 worded 677 chared )
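the append example from doc.bash.arrays above, runnable:

```shell
arr=( abc def )
arr+=( abc )              # append one element
echo "${#arr[@]}"         # 3
printf '%s\n' "${arr[2]}" # abc
```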

++ paste BEGINFILE doc.bash.assoc-arrs

associative arrays are array vars that can take any kind of string as key,
instead of only non-negative numbers

 declare -A varname ; varname[one]=1 varname=( [two]=2 ["three"]=3 ['..']=more )
the assignment can also be done inline with declare, the same syntax way;
just remove the ';' command separator for this

assoc arrays have to be declared -A before their first such usage,
otherwise they are not assoc-keyed


++ paste ENDFILE doc.bash.assoc-arrs ( 6 lined 69 worded 407 chared )
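a small runnable sketch for doc.bash.assoc-arrs above; count is a made-up name:

```shell
declare -A count=( [one]=1 [two]=2 )
count[three]=3          # add a key after declaration
echo "${count[two]}"    # 2
echo "${#count[@]}"     # 3
```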

++ paste BEGINFILE doc.bash.var-expansion

variable names, when prefixed with the dollar sign, expand to their set
content

 var=abc ; printf %s\\n "$var"
sets varname 'var' to the content after the equal sign '=' ( 'abc' in this
case )
then ';' separates the commands, since without it the var would get set
only for the current command instead of beyond it
then printf prints the var to stdout for the user to see, with a newline
added ( \\n for '\n', "\\n" works too .. )

to separate var names, as in myname1myname2, put { } around the name:
 ${myname1}$myname2
i dropped it on the second since there is no need, useless typing

some more advanced stuff possible inside ${ follows

 ${#abc}
returns the length of the content of the var named 'abc'
 ${#arr[*]} ${#arr[@]}
return the number of elements in the array

 ${var#pat}
cuts the shortest match of pattern pat from the beginning of var
 ${var##pat}
cuts the longest match of pat from the beginning of var

 ${var%pat} ${var%%pat}
same as # and ##, just for the end of the var content instead of the
beginning

 ${var/str}
replaces str in var with nothing, only once
same as
 ${var/str/}

 ${var/str/rep}
replaces str in $var with rep, only once
 ${var//str/rep}
replaces str with rep as many times as it appears

 ${var//str}
removes str out of var as many times as found

 in the pattern part of the / and // forms, a leading # anchors at the
beginning and % at the end of the string


++ paste ENDFILE doc.bash.var-expansion ( 42 lined 231 worded 1264 chared )
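a worked example of the trims and replaces from doc.bash.var-expansion above:

```shell
var=path/to/file.txt
echo "${#var}"      # 16
echo "${var#*/}"    # to/file.txt      ( shortest */ cut from the front )
echo "${var##*/}"   # file.txt         ( longest */ cut from the front )
echo "${var%.txt}"  # path/to/file
echo "${var/to/TO}" # path/TO/file.txt
```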

++ paste BEGINFILE doc.bash.brace-expansion

brace expansion {word,second,third} evaluates to three pieces of args/text,
'word' 'second' and 'third'
it can be used tight within one string to multiply prefix and suffix data;
however, if the pieces are given as separate args instead, that
prefix-and-suffix multiplication is lost and you get
{just-words-expanded-instead-of-with-prefix-and-suffix}


++ paste ENDFILE doc.bash.brace-expansion ( 2 lined 40 worded 290 chared )
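the prefix/suffix multiplication from doc.bash.brace-expansion above, runnable:

```shell
echo pre{a,b,c}post   # preapost prebpost precpost
echo {1..3}           # 1 2 3   ( the range form )
```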

++ paste BEGINFILE doc.bash.escaping

escaping 'special' stuff, aka making it appear as normal text arguments
instead of a specially interpreted sequence, can be done in a couple of ways
first of all, by backslash: \<nextchar> is escaped
 \\
results, in command line parsing, in one backslash
in double quotes too; in single quotes, however, as they were made to not
interpret, both chars would appear
in $' interpretation it appears as one; any \ not followed by an
interpreted sequence appears literally

 $ ./example this\ is\ one\ filename.text
 $ ./sqample select\ \*\ in\ foo

quotes are also used by cmdline parsing and some other places ( and some
not ) to escape

" to " double quotes escape spaces, .., while ` and $( command exec work
inside, $var ${expansion} also
' to ' single quotes escape nearly everything, everything except
themselves; for one to appear you end ', make another \' appear, then begin
' again for the further text
` to ` and $( to ) command execution don't escape; they are where you
usually need to escape, since they open a cmd interpreter just like reading
commands typed interactively, except they don't read, they expect the
command already written
$' to ' string interpretation escapes only backslashes not followed by an
interpreted sequence; \\ appears as one \

escape sequences have here and there some meanings, some common ones:
 \12
a number interpreted as one char ( octal, not decimal, in $'..' )
 \x2e
hex interpretation
 \063
octal with a leading zero
 \e
for the escape char, which can also be written \33 \x1b \033 .. ( not the
backslash escape char meant here )
 \r
for carriage return, the 'point back to first char of current line' char
 \n
for newline

there are at least two big ways to escape data inline in bash, for further
special parsing
remember that when you're trying to batch-generate further commands with
quotes or escapes, you almost always have to spawn another parser instance
to make bash interpret the quotes \and specials at all, _or_ use the eval
shell keyword to make the current shell interpret the quotes, _or_ define
and use aliases with the new code, ..or.. make the current shell source the
code lines, eg . <( printf %s "$( </dev/fd/3 )" ) 3<<'eo' \n code here \n
eo .. eo for end-of short
 ${var@Q}
makes bash expand var's content quoted in a format that can be reused as
input
printf's %q escapes by sensible standards; the less special the data, the
fewer special quotes or such are produced
 printf %q "$var"
 printf -v var %q "$var"
 printf -v complex_command './already %q\n' "$var"

eval'ing code safely ( who can, may try ) begins at knowing that the string
( command ) to eval must look exactly as it would be written at the prompt
in the current execution context
it can be very tricky to escape all the sub-quotes and stuff
you can stack ` as well, with mad fail results

i see eval'ing as a half solution for supporting more deeply interpreted
subroutines, like microcodes or properly done recursive aliases
 eval "arr=( $( ls --quoting-style=shell ) )"
 eval "arr=( $( printf %q\\n * ) )"

 declare -p arr
displays arr's content in a coder-friendly way


++ paste ENDFILE doc.bash.escaping ( 51 lined 518 worded 2932 chared )

++ paste BEGINFILE doc.bash.aliases

aliases ( for runtime code ) behave as defined keywords that, while bash is
interpreting commands, may or may not mean something special ( their set
code content )

you define aliases to shorten code and to not write duplicate code
a side thingy is they are faster than calling functions instead
on the downside, they can be tricky to get working

aliases get resolved at the command-name parsing step of shell command
processing, into their set content
they should be written looking the same as when you'd write them manually
in that runtime situation

 alias foo=printf
 foo foo2 foo3
  -> foo2

 alias foo=printf\ %s
 foo foo2 foo3
  -> foo2foo3

if an alias's value ends with a space, the following word is not treated as
a normal argument but is itself checked for further alias expansion

 alias foo='printf %s ' name='a b c'
 foo name
  -> abc

( the trailing space inside foo's value is what makes 'name' get
alias-expanded; printf %s a b c reuses the format per arg, so it prints
abc, without separators )

aliases do not officially support arguments; just set vars or arrays before
usage, ready for use in-alias
optionally, when needed, start the alias content with a newline or ; or
_nonexisting_variable_name= ; .. to make the var definitions get inherited
by the further commands, so that they don't end up as only-for-that-cmd
assignments instead of global

aliases are fast and neat for stacking up var assignment code without
external commands


++ paste ENDFILE doc.bash.aliases ( 29 lined 241 worded 1388 chared )

++ paste BEGINFILE doc.bash.printf

printf is a print-formatted-data command, printing to stdout if not
specified otherwise, like with -v varname_to_set_instead_of_stdout

printf's syntax includes many single-letter %<here> format specifiers, used
in the first data arg to define the format in which the later text
arguments get printed; the format is reused until all arguments are
consumed

 printf %s foo bar
  -> foobar ( no ending newline either )
 printf %s\\n foo bar
 printf %s'\n' foo bar
 printf '%s\n' foo bar
 printf "%s\\n" foo bar
 printf $'%s\n' foo bar
  -> foo(\n)
  -> bar(\n)
 printf %s\  foo bar
  -> foo bar ( no ending newline, a trailing space though )
 printf %s\  foo bar ; printf \\n
  -> foo bar (\n ending newline)

 printf -v varname %s foo bar
  -> sets the var varname to the processed output, 'foobar'

format specifiers include
 %s
for normal mixed strings
 %b
for strings whose escape sequences get interpreted at print time, eg can
include \n for newline
 %q
for \ escaped strings
 %(<strftime formats>)T
for formatted time; printf's data arguments then serve as the epoch-seconds
date reference, or -1 for the current time and -2 for the time the shell
was started
eg
 %(%T+%F%z)T
results in a complete human readable timespec

see man 3 printf for details of the different format specifiers


++ paste ENDFILE doc.bash.printf ( 35 lined 190 worded 1126 chared )
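two runnable bits for doc.bash.printf above: the %(...)T form (bash 4.2+) with TZ pinned so the output is stable, and -v storing instead of printing:

```shell
# epoch 0, formatted in UTC
TZ=UTC printf '%(%F %T)T\n' 0    # 1970-01-01 00:00:00

printf -v joined %s foo bar      # -v stores the result in the var 'joined'
echo "$joined"                   # foobar
```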

++ paste BEGINFILE doc.bash.glob

globs are the patterns that describe how file names ( and strings in [[ )
get matched
see extglobs for bash's extended globs that can do more

 * means everything
used in between, it means anything-in-between [up to or from the other
specified pieces of data]
 [abc0123]
 [a-z] are one-character-but-many-candidates matchings
 [$'\100'-$'\200']
 [!a-z] to invert


++ paste ENDFILE doc.bash.glob ( 9 lined 54 worded 345 chared )

++ paste BEGINFILE doc.bash.read �

read is the bash builtin used to read line(s) in ( and, with loops, to
process them one per iteration )

synopsis : read [switches] [varname(s)]
if varname is omitted REPLY is used to read into
if multiple varnames are given, it splits the chunk read by $IFS into them
switches include
 -r
dont make it try to interpret \backslash sequences
 -d <char>
set the line delimiter
if '' ( an empty string ) is given, it returns at the \0 char
 -n <number>
read at most that many chars
 -e
use readline, making arrow keys and such available
 -p <text>
print that text before reading, appearing like a prompt or question for
example
 -t <number>
timeout in seconds
 -s
quiet, silent, dont echo the inputted chars; that is tty only, input from
file redirections never got echoed anyway
 -a <array varname instead of other var names>
assign the fields that would have been split into the array of that name
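a minimal sketch tying -r, IFS handling and multi-var splitting together:

```shell
# line by line; -r keeps backslashes, IFS= keeps surrounding whitespace
while IFS= read -r line ; do
 printf 'got: %s\n' "$line"
done <<< $'one\ntwo'

# split one read chunk into several vars by IFS
IFS=: read -r user pass rest <<< 'root:x:0:0'
printf '%s\n' "$user"   # -> root
```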


++ paste ENDFILE doc.bash.read �( 23 lined 155 worded 846 chared )

++ paste BEGINFILE doc.bash.mapfile �

mapfile reads data ( for example redirected from files or commands ) into
an array ( the last argument to mapfile is its name, filled from element 0
on ), split by one separator ( -d , a newline by default ) ( -t to strip
it from the stored values )

 -d <char>
delimiter, separator
 -t
do not include the matched separator in the array data content
 -O <number>
start filling the array at that index instead of 0
 -u <number>
read from that fd instead of stdin
 -n <number>
read only so many lines, 0 means all
 -s <number>
skip so many beginning lines before filling
 -C <function_name>
call this function to process records
 -c <number>
call the function every <number> records read
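a minimal sketch of the default line-splitting use:

```shell
# fill arr with the input lines; -t strips the trailing newline separators
mapfile -t arr <<< $'first\nsecond\nthird'
printf '%s\n' "${#arr[@]}"   # -> 3
printf '%s\n' "${arr[1]}"    # -> second
```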


++ paste ENDFILE doc.bash.mapfile �( 18 lined 116 worded 613 chared )

++ paste BEGINFILE doc.bash.loops �

looping code, eg executing stuff multiple times tho typing it only once,
can be done in a few ways

for <varname> in <arguments> ; do <commands> ; done
 for var in a b\ c 'd e' $'e\nf' ; do printf %s\ \  "$var" ; done
runs code per arg element

while <command(s)> ; do <code> ; done
 while j=( $( jobs -p ) ) ; (( jc = ${#j[*]} > 0 )) ; :
code_to_act_on_jobs_running ; done
runs the code while the last condition command is true ( aka exit code 0 )

until <command[s]> ; do <code> ; done
 until command_to_succeed ; do desired_action ; done
runs the code until command exits 0, the reverse of the while loop


++ paste ENDFILE doc.bash.loops �( 13 lined 117 worded 576 chared )

++ paste BEGINFILE doc.awk �

awk is a lightweight data processing / modification coding language

it mainly separates data by two separators
the record separator, a newline by default
and the field separator, whitespace by default

it can be used for fast data extraction | statistics | modification

awk's command line usage is
[some optional var=value assignments] awk [optional switches] [-e if gawk]
'code' [optional files to process]

the code element is required; the shortest is something like '1' for
'true, so print every line'
if there is no code specified, and no -f file-of-code either, awk gives
an error
without gawk's -e 'code_block' you can only have one code arg specified,
and after it [nearly] everything is treated as an input file
if there are no input files specified, awk takes its input from stdin
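the shortest invocations, as a runnable sketch:

```shell
# '1' is always true, so the default action ( print the record ) runs
printf 'a\nb\n' | awk '1'

# a pre-set var and stdin as the input
printf 'x\n' | awk -v pre='>> ' '{ print pre $0 }'   # -> >> x
```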


++ paste ENDFILE doc.awk �( 15 lined 129 worded 746 chared )

++ paste BEGINFILE doc.gawk �

gawk is the gnu awk, with the important gnu extensions and features


++ paste ENDFILE doc.gawk �( 1 lined 12 worded 67 chared )

++ paste BEGINFILE doc.awk.lang �

i mostly cover the gawk area of awk

code consists of blocks ( of code ) and statements ( the code, commands )
statements are separated by ; or newlines, and { } delimit the code blocks
statements spanning multiple lines, including quoted strings, need the
newline escaped, with \ as the last char on the line

 BEGIN { FS = "\t" }
set var FS ( field separator ) to a tab at beginning of script

 /anymatch/
print whole line where anymatch text gets matched as regex
 /anymatch/ { print $3 }
print field 3 of the anymatch lines

 BEGINFILE { print "new file:", FILENAME }
prints that and the filename at the start of every new file ( gawk only )
 ENDFILE { print "bye to file:", FILENAME }
prints the filename at the end of every file ( gawk only )

 END { print "bye" }
prints 'bye' at end

i know, for more useful stuff just read on, it will come up somewhere

there is a short conditional if-then-else shortcut ( the ternary
operator ) in the form of
 ( cond ) ? then : else
 awk 'BEGIN { a = ( cond ) ? "b" : "c" }'
 awk 'BEGIN { b = ( ( another ) ? mixture() : of() ) ( mix() ) }
 function mixture() { } function of() { } function mix() { }'

by now shortened to
 var var2 ENVIRON [ "var3" ] ? new = 1 : ""
 ors ? ORS = ors : ENVIRON [ "ors" ] ? ORS = ENVIRON [ "ors" ] : ""
 ! ors in _empty ? ORS = ors : ""

print \ printf and return statements are not usable inside the ternary
branches


++ paste ENDFILE doc.awk.lang �( 36 lined 269 worded 1260 chared )

++ paste BEGINFILE doc.awk.vars �

vars get set simply by
 varname = "string"
 varname = other_var "other_inbween" afunc()
are wild examples
to use the var, simply reference it by its varname set

vars prefixed by the $ dollar sign have their content interpreted as a
field number, returning that field's content

in other words
 var = "abc"
 var = var "abc"
 var = "abc" var
 two = 2
 two += 1
 print var, $two
with any input containing a third field, it will print "abcabcabc" and
the current third field

well there is a better, more advanced and clearly shorter conditional
assignment style i just now discovered

 [ ! ] condition ? strs : strs
 var = [ ! ] condition ? str : str

it means as much as: if the condition\var is true ( non null, not empty ),
evaluate the expression after ? , otherwise the one after :

 ( def != "" ) ? "" : def = ENVIRON [ "def" ]
 ( def != "" ) ? "" : def = "more static default"

to have a mix of such, you put the assignments in question into ( .. )
next to each other

 new = ( a ? a : b ) "." ( c ? c : d ) "\n"

..to keep them and other parts such as the dot and the newline isolated
from expression parsing


++ paste ENDFILE doc.awk.vars �( 32 lined 226 worded 1079 chared )

++ paste BEGINFILE doc.awk.records �

a record, in awk, is like one line entry read, separated by the RS
( record separator ) value ( by default a newline )
if RS is empty awk goes into paragraph mode: records are separated by
blank lines
awk then runs its specified code over every record

 $ awk -v RS= '{ print $1 }'
toy code that prints the first field of every blank-line-separated record

with gawk, the text RS actually matched ( useful when RS is a regex ) can
be reused via the RT variable
 # gawk -v RS='[0-9]+' -v ORS= -e '{ print $0 RT }' # 1:1 data passthrough
with a variable record separator regex
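a runnable sketch of the empty-RS paragraph mode ( plain awk, no gawk
needed ):

```shell
# RS= : blank lines separate the records
printf 'a b\nc d\n\ne f\n' | awk -v RS= '{ print NR ": " $1 }'
# -> 1: a
# -> 2: e
```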


++ paste ENDFILE doc.awk.records �( 9 lined 108 worded 518 chared )

++ paste BEGINFILE doc.awk.fields �

fields are the second unit, string- or regex-separated, that awk splits
data into variables
by default the separator is a space, with the special meaning 'runs of
whitespace'; it can be anything, also a regex

the data per record then gets split by the field separator ( FS , -F arg )
into $number fields, NF many of them

to access one, simply prepend the dollar sign in front of a number, like
$1 for the first 'word' or $( NF - 1 ) for the second to last

as a side note, only the default whitespace FS strips initially matched
separators away
eg
 '   a   b' == $1 = a and $2 = b
 ",,,a,,,b" with FS = "," $4 is a and $7 is b
 ",,,a,,,b" with FS = ",+" ( one or more commas ) $1 is empty and $2 is a

 $ awk -F,+ '{ print $2 }' <<<',,,a,,,b'
  -> a

 $ awk -F\\t '{ print $1, $NF, $( NF -1 ) }' <<<$'a\tb\tc\td\te'
 -> a e d


++ paste ENDFILE doc.awk.fields �( 18 lined 159 worded 759 chared )

++ paste BEGINFILE doc.awk.types �

in general, every var content is \0 null byte safe and is internally a
string; a few functions handle things slightly differently
a string can be used as a number ( as long as its numeric, or can be
converted via the strtonum( ) gawk extension )
or as a regex

 print $1 + strtonum( "-2234" ) + 1
use the str'ed number converter, good when extracting \ migrating data

 $0 ~ "regex"
 $0 ~ /regex/
 /regex/
success if $0 ( the whole input line / record ) matches the regex; check
the regex docs on regexes

 var = "abc" " " "def"
 var = 123
 var = "123"
as said, only one data type

 printf "%s\n", $1
 printf "%f\n", $1
prints $1 as string or float number

besides that there are arrays, which are string-indexed, possibly
multi-dimensional containers, for data structures or whatever

 arr [ 0 ] = "null"
 arr [ 1 ] = "one"
 arr [ 2 ] [ 2 ] = "22"
 arr [ 2 , 2 ] = "22"
the comma gets replaced by awk parsing with SUBSEP ( \034 by default ) to
join the index elements into one key
note [2][2] ( gawk's real arrays of arrays ) and [2,2] ( one SUBSEP-joined
key ) look alike but are different constructs

in function definitions, parameters have no types, only variable names
its up to you to use em so they dont error the few awk functions =)
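a small sketch showing that [2,2] is really one SUBSEP-joined string key:

```shell
awk 'BEGIN {
 arr [ 2 , 2 ] = "22"
 # the stored key is "2" SUBSEP "2"
 print ( ( 2 SUBSEP 2 ) in arr ), arr [ 2 , 2 ]
}'
# -> 1 22
```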


++ paste ENDFILE doc.awk.types �( 33 lined 225 worded 1106 chared )

++ paste BEGINFILE doc.gawk.switch �

switch is a gawk'ism used to stack up code for data string conditions with
only one big check, unlike if / else if chains that re-evaluate each time

----

switch ( var ) {
 case "string" :
 case /regex_same_code/ :
  code
  break
 default : nomatch_code_here ; break
}

a variable instead of "string"|/regex/ as a case label seems not
supported ( case labels must be constants )


++ paste ENDFILE doc.gawk.switch �( 13 lined 52 worded 295 chared )

++ paste BEGINFILE doc.awk.if �

if the condition evaluates to not null and not empty, the specified code
runs
code commands can be combined via { .. }

if ( condition )
 onecode
[else if ( condition )
 onecode]
[else
 onecode]

----

if ( cond ) { cmd ; cmd } else other

----

if ( cond ) { cmd1 ; cmd2 } else if ( other ) { cmd3 ; cmd4 } else { cmd5 ;
cmd6 }
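the same structure as one runnable line-classifying sketch:

```shell
printf '3\n15\n' | awk '{
 if ( $1 > 10 ) print $1, "big"
 else if ( $1 > 2 ) print $1, "mid"
 else print $1, "small"
}'
# -> 3 mid
# -> 15 big
```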


++ paste ENDFILE doc.awk.if �( 17 lined 75 worded 312 chared )

++ paste BEGINFILE doc.awk.while �

 while ( condition ) [{] code [}]
runs code while the condition evaluates to not null\empty

 while ( ++i <= 20 ) print "not 21 yet"


++ paste ENDFILE doc.awk.while �( 4 lined 26 worded 128 chared )

++ paste BEGINFILE doc.awk.print �

awk has two facilities to output data directly, print and printf; this
covers print
print simply prints the data specified after it as arguments, to stdout
or somewhere else
it joins its comma separated arguments with OFS ( output field separator )
and at the end it prints ORS ( output record separator )

 print "a", "b"
  -> a b\n
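a sketch of OFS and ORS in action:

```shell
# OFS joins the comma separated args, ORS ends each print
printf 'x\n' | awk -v OFS=- -v ORS='.\n' '{ print "a", "b" }'
# -> a-b.
```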


++ paste ENDFILE doc.awk.print �( 6 lined 58 worded 324 chared )

++ paste BEGINFILE doc.awk.printf �

awk has a print and a printf for data output, this covers printf a bit

printf stands for print-format
its first arg must be a format specifier for the further args processed,
which are optional; a single string without format specifiers can be given
alone and is then itself the format to print

 awk '{
  if ( NR != 1 ) printf "  "
  printf "%s\t%s", $2, $4
 } END { printf "\n" }' <<<$'a b c d\ne f g h'
  -> b\td  f\th


++ paste ENDFILE doc.awk.printf �( 12 lined 82 worded 387 chared )

++ paste BEGINFILE doc.awk.sub �

sub( <from-str\regex> , <to-fmt> [ , var ] )
is a function to replace the first match of from with to, only once that
is; for as-many-as-appear there is gsub()
returned is 1 if the substitution did occur, or 0 if not
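a runnable sketch; without the var argument sub() operates on $0:

```shell
# replace only the first a, and show the 1 success return value
printf 'banana\n' | awk '{ n = sub( /a/ , "A" ) ; print n, $0 }'
# -> 1 bAnana
```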


++ paste ENDFILE doc.awk.sub �( 3 lined 39 worded 231 chared )

++ paste BEGINFILE doc.awk.gsub �

gsub( <from-regex> , <to-fmt> [ , var ] ) is like sub but replaces from
with to as many times as they appear, not just once ( first instance )
returned is the number of times the replacement did occur

 gsub( /a.*z../, "" )
 c = 3 ; gsub( "a" , "b" , $c ) ; print $c
<<<'abb bba aba'
  -> bbb


++ paste ENDFILE doc.awk.gsub �( 7 lined 63 worded 287 chared )

++ paste BEGINFILE doc.gawk.gensub �

gensub is a gawk extension that allows more complex substitutions where
regex groups can be reused
returned is the resulting modified string; there is no in-var replacement,
only manual assignment

 begin = "this is my string"
 print gensub( /(.*)my(.*)/ , "\\1>your<\\2" , "g" , begin )
  -> this is >your< string
\\ instead of \ because of the implementation: the argument has to receive
'\1', and inside double quotes that is written \\1
"g" stands for all matches, not just one otherwise numerically specified


++ paste ENDFILE doc.gawk.gensub �( 8 lined 77 worded 458 chared )

++ paste BEGINFILE doc.foobar �

foo and bar are common placeholder terms for 'just put anything in here'


++ paste ENDFILE doc.foobar �( 1 lined 11 worded 53 chared )

++ paste BEGINFILE doc.man �

man pages [manuals to apps and informational documents] are available on
unix systems, optionally extended by more, and can be retrieved via:
 man [section_number] command_name

usually you can do like
 $ man bash
 /<text to search the bash man page for>
 n
 n
to skip to the next results
 20n
to go 20 n's ( next search results ) further




++ paste ENDFILE doc.man �( 13 lined 52 worded 301 chared )

++ paste BEGINFILE doc.grep �

grep is used to match data out of data, eg to get the lines containing
'abc' out of a bigger stream which may contain such
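a minimal sketch:

```shell
printf 'one\ntwo abc\nthree\n' | grep abc   # -> two abc
```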


++ paste ENDFILE doc.grep �( 1 lined 22 worded 111 chared )

++ paste BEGINFILE doc.printf �

printf is a tool to print data according to a format

the first argument is the format, or a static string to parse
the second and later ones are the data pieces it should act on

as format
 %s
means a later-specified string
 %d
is a decimal number
 %f is a float

and more, see man printf for the complete details
many shells, including bash, have a more powerful printf builtin
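a small sketch; note the format is reused until the data args run out:

```shell
printf '%s=%d\n' a 1 b 2
# -> a=1
# -> b=2
```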


++ paste ENDFILE doc.printf �( 14 lined 63 worded 350 chared )

++ paste BEGINFILE doc.ls �

ls is a util to list files
 -t
for most recently modified first ( -c to sort by ctime instead )
 -d
for showing the dir itself instead of listing inside it
 -l
for a more detailed listing
 --quoting-style=shell
for ' escaped filenames where needed; use shell-always to always quote
 -Q
for doublequoted names

when dealing with commands and files, use shell globs directly instead of
trying to parse ls output


++ paste ENDFILE doc.ls �( 13 lined 59 worded 323 chared )

++ paste BEGINFILE doc.mv �

mv is used to move one file to another name
 -v shows what got moved from where to where
 -f dont prompt or such before overwriting, just do so [ force ]
 -- ends switch parsing, making dash-beginning filenames usable


++ paste ENDFILE doc.mv �( 4 lined 43 worded 220 chared )

++ paste BEGINFILE doc.cp �

with cp you copy files / dirs and such

 -r
recursive, needed for dirs; cp on a dir without -r doesnt work

 -p
preserve mode, ownership and timestamps

.. much more ... man cp


++ paste ENDFILE doc.cp �( 9 lined 32 worded 150 chared )

++ paste BEGINFILE doc.ps �

ps is one old way of showing processes running

 ps aux
to see full system list, not just current user or session
 ps p <pid>
to list only that pid

.. man ps


++ paste ENDFILE doc.ps �( 8 lined 33 worded 151 chared )

++ paste BEGINFILE doc.find �

find is used to output matched elements of the file system, eg files
and dirs

there are lots of option switches that find supports
some common ones are..
 ( and )
group statements ( escape or quote them from the shell )
 -name <glob as string ( one argument , not expanded by the shell to the
actual filenames .. ) >
show only matches of this glob
 -a
for and, joining other statements \ groupings
 -o
for or, same
 -print0
separate output records by the \0 byte, not the \n newline

in scripts, you'd
 while IFS= read -r -d '' r ; do <code with $r> ; done < <( find .. .. -print0 )
the IFS= is optional, it preserves surrounding whitespace
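a self-contained sketch in a throwaway temp dir ( the file names are just
examples ):

```shell
d=$(mktemp -d)
touch "$d/a.txt" "$d/b.log"
find "$d" -name '*.txt'   # prints only the .txt path
rm -r "$d"
```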


++ paste ENDFILE doc.find �( 18 lined 108 worded 544 chared )

++ paste END ++ 64 files 993 lines 6962 words 38378 chars
