From: Mohammad Akhlaghi
Subject: [gnuastro-commits] master 536b056 110/113: Imported recent changes in master, conflicts in book fixed
Date: Fri, 16 Apr 2021 10:34:03 -0400 (EDT)

branch: master
commit 536b056847e648cf9dbb67d01b135c86025c00a0
Merge: d0d8d20 df23eac
Author: Mohammad Akhlaghi <mohammad@akhlaghi.org>
Commit: Mohammad Akhlaghi <mohammad@akhlaghi.org>

    Imported recent changes in master, conflicts in book fixed
    
    Several conflicts came up because of the new one-sentence-per-line
    convention in the book, but they were easy to fix.
---
 NEWS                         |    69 +-
 bin/TEMPLATE/Makefile.am     |     6 +-
 bin/TEMPLATE/ui.c            |     1 +
 bin/arithmetic/Makefile.am   |     7 +-
 bin/arithmetic/args.h        |    13 +
 bin/arithmetic/arithmetic.c  |     3 +-
 bin/arithmetic/main.h        |     1 +
 bin/arithmetic/ui.c          |     1 +
 bin/arithmetic/ui.h          |     3 +-
 bin/buildprog/Makefile.am    |     6 +-
 bin/buildprog/ui.c           |     1 +
 bin/convertt/Makefile.am     |     6 +-
 bin/convertt/color.c         |    67 +-
 bin/convertt/ui.c            |     1 +
 bin/convolve/Makefile.am     |     6 +-
 bin/cosmiccal/Makefile.am    |     6 +-
 bin/cosmiccal/args.h         |    15 +
 bin/cosmiccal/cosmiccal.c    |     4 +
 bin/cosmiccal/main.h         |     1 +
 bin/cosmiccal/ui.c           |    61 +-
 bin/cosmiccal/ui.h           |     3 +-
 bin/crop/Makefile.am         |     6 +-
 bin/crop/ui.c                |     2 +
 bin/fits/Makefile.am         |     6 +-
 bin/fits/ui.c                |     1 +
 bin/gnuastro.conf            |     2 +-
 bin/match/Makefile.am        |     6 +-
 bin/match/ui.c               |     1 +
 bin/mkcatalog/Makefile.am    |     6 +-
 bin/mknoise/Makefile.am      |     6 +-
 bin/mknoise/ui.c             |     2 +
 bin/mkprof/Makefile.am       |     6 +-
 bin/mkprof/ui.c              |     1 +
 bin/noisechisel/Makefile.am  |     6 +-
 bin/noisechisel/detection.c  |    10 +-
 bin/noisechisel/ui.c         |     1 +
 bin/segment/Makefile.am      |     6 +-
 bin/segment/ui.c             |    13 +-
 bin/statistics/Makefile.am   |     6 +-
 bin/table/Makefile.am        |     6 +-
 bin/table/args.h             |    50 +-
 bin/table/main.h             |    26 +-
 bin/table/table.c            |   266 +-
 bin/table/ui.c               |   332 +-
 bin/table/ui.h               |    15 +-
 bin/warp/Makefile.am         |     6 +-
 configure.ac                 |   149 +-
 doc/announce-acknowledge.txt |    17 +-
 doc/genauthors               |    18 +-
 doc/gnuastro.en.html         |     8 +-
 doc/gnuastro.fr.html         |     8 +-
 doc/gnuastro.texi            | 22421 +++++++++++++++--------------------------
 doc/release-checklist.txt    |    16 +-
 lib/label.c                  |     9 +-
 lib/options.c                |    40 +-
 lib/statistics.c             |    16 +-
 56 files changed, 8747 insertions(+), 15024 deletions(-)

diff --git a/NEWS b/NEWS
index 9756ca1..14c5897 100644
--- a/NEWS
+++ b/NEWS
@@ -7,6 +7,45 @@ See the end of the file for license conditions.
 
 ** New features
 
+  Arithmetic:
+   --onedonstdout: when the output is one-dimensional, print the values on
+     the standard output, not into a file.
+
+  CosmicCalculator:
+   --lineatz: return the observed wavelength of a line if it was emitted at
+     the redshift given to CosmicCalculator. You can either use known line
+     names, or directly give a number as any emitted line's wavelength.
+
+  Table:
+   --equal: Output only rows that have a value equal to the given value in
+     the given column. For example `--equal=ID,2,4,5' will select only rows
+     that have a value of 2, 4, or 5 in the `ID' column.
+   --notequal: Output only rows that have a different value compared to the
+     values given to this option in the given column.
+
+** Removed features
+
+** Changed features
+
+** Bugs fixed
+  bug #56736: CosmicCalculator crash when a single value given to --obsline.
+  bug #56747: ConvertType's SLS colormap has black pixels which should be orange.
+  bug #56754: Wrong sigma clipping output when many values are equal.
+
+
+
+
+
+* Noteworthy changes in release 0.10 (library 8.0.0) (2019-08-03) [stable]
+
+** New features
+
+  Installation:
+   - With the following options at configure time, it's possible to
+     build Gnuastro without the optional libraries (even if they are
+     present on the host system): `--without-libjpeg', `--without-libtiff',
+     `--without-libgit2'.
+
   All programs:
    - When an array is memory-mapped to non-volatile space (like the
      HDD/SSD), a warning/message is printed that shows the file name and
@@ -113,25 +152,15 @@ See the end of the file for license conditions.
    - gal_statistics_outlier_flat_cfp: Improved implementation with new API.
    - New `quietmmap' argument added to the following functions (as the
      argument following `minmapsize'). For more, see the description above
-     of the new similarly named option to all programs.
-       - gal_array_read
-       - gal_array_read_to_type
-       - gal_array_read_one_ch
-       - gal_array_read_one_ch_to_type
-       - gal_data_alloc
-       - gal_data_initialize
-       - gal_fits_img_read
-       - gal_fits_img_read_to_type
-       - gal_fits_img_read_kernel
-       - gal_fits_tab_read
-       - gal_jpeg_read
-       - gal_label_indexs
-       - gal_list_data_add_alloc
-       - gal_match_coordinates
-       - gal_pointer_allocate_mmap
-       - gal_table_read
-       - gal_tiff_read
-       - gal_txt_image_read
+     of the new similarly named option to all programs: `gal_array_read',
+     `gal_array_read_to_type', `gal_array_read_one_ch',
+     `gal_array_read_one_ch_to_type', `gal_data_alloc',
+     `gal_data_initialize', `gal_fits_img_read',
+     `gal_fits_img_read_to_type', `gal_fits_img_read_kernel',
+     `gal_fits_tab_read', `gal_jpeg_read', `gal_label_indexs',
+     `gal_list_data_add_alloc', `gal_match_coordinates',
+     `gal_pointer_allocate_mmap', `gal_table_read', `gal_tiff_read' and
+     `gal_txt_image_read'.
 
   Book:
    - The two larger tutorials ("General program usage tutorial", and
@@ -155,6 +184,8 @@ See the end of the file for license conditions.
   bug #56635: Update tutorial 3 with bug-fixed NoiseChisel.
   bug #56662: Converting -R to -Wl,-R causes a crash in configure on macOS.
   bug #56671: Bad sorting with asttable if nan is present.
+  bug #56709: Segment crash when input has blanks, but labels don't.
+  bug #56710: NoiseChisel sometimes not including blank values in output.
 
 
 
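As a usage sketch of the new features announced above (the catalog name `cat.fits', the column names, and the `lyalpha' line name are hypothetical examples, not taken from this commit):

    $ asttable cat.fits --equal=ID,2,4,5      # Keep rows with ID 2, 4 or 5.
    $ asttable cat.fits --notequal=ID,2       # Drop rows with ID 2.
    $ astcosmiccal --redshift=2.5 --lineatz=lyalpha

The last command should print the observed wavelength of the given line at z=2.5, assuming `lyalpha' is among the known spectral line names.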
diff --git a/bin/TEMPLATE/Makefile.am b/bin/TEMPLATE/Makefile.am
index bab4ad9..ee006da 100644
--- a/bin/TEMPLATE/Makefile.am
+++ b/bin/TEMPLATE/Makefile.am
@@ -23,6 +23,9 @@
 AM_LDFLAGS = -L\$(top_builddir)/lib
 AM_CPPFLAGS = -I\$(top_srcdir)/bootstrapped/lib -I\$(top_srcdir)/lib
 
+if COND_NORPATH
+  MAYBE_NORPATH = $(CONFIG_LDADD)
+endif
 
 
 ## Program definition (name, linking, sources and headers)
@@ -33,7 +36,8 @@ bin_PROGRAMS = astTEMPLATE
 ## don't keep external variables (needed in Argp) after the first link. So
 ## the `libgnu' (that is indirectly linked through `libgnuastro') can't see
 ## those variables. We thus need to explicitly link with `libgnu' first.
-astTEMPLATE_LDADD = $(top_builddir)/bootstrapped/lib/libgnu.la -lgnuastro
+astTEMPLATE_LDADD = $(top_builddir)/bootstrapped/lib/libgnu.la -lgnuastro \
+                    $(MAYBE_NORPATH)
 
 astTEMPLATE_SOURCES = main.c ui.c TEMPLATE.c
 
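The COND_NORPATH/MAYBE_NORPATH pattern introduced here (and repeated in every program's Makefile.am below) is a standard Automake conditional: when the conditional is false, MAYBE_NORPATH expands to nothing, so $(CONFIG_LDADD) is only added to the link command when rpath has been disabled. Presumably the conditional itself is defined in configure.ac with something like `AM_CONDITIONAL([COND_NORPATH], [...])'; the exact test is in the configure.ac hunk, which this email does not show.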
diff --git a/bin/TEMPLATE/ui.c b/bin/TEMPLATE/ui.c
index 7d888b7..717bf17 100644
--- a/bin/TEMPLATE/ui.c
+++ b/bin/TEMPLATE/ui.c
@@ -100,6 +100,7 @@ ui_initialize_options(struct TEMPLATEparams *p,
 
 
   /* Set the necessary common parameters structure. */
+  cp->program_struct     = p;
   cp->poptions           = program_options;
   cp->program_name       = PROGRAM_NAME;
   cp->program_exec       = PROGRAM_EXEC;
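A note on the new `program_struct' member set here (and in every other program's ui.c below): it stores a pointer to the program's own parameter struct inside the common-params structure, presumably so that generic option-handling code in lib/options.c can hand the program's struct back to per-option callbacks such as `ui_add_to_single_value' in CosmicCalculator further down. The exact consumer is in the lib/options.c hunk, which is not shown in full here.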
diff --git a/bin/arithmetic/Makefile.am b/bin/arithmetic/Makefile.am
index 6d562d8..22ccfd2 100644
--- a/bin/arithmetic/Makefile.am
+++ b/bin/arithmetic/Makefile.am
@@ -23,13 +23,18 @@
 AM_LDFLAGS = -L\$(top_builddir)/lib
 AM_CPPFLAGS = -I\$(top_srcdir)/bootstrapped/lib -I\$(top_srcdir)/lib
 
+if COND_NORPATH
+  MAYBE_NORPATH = $(CONFIG_LDADD)
+endif
+
 
 
 ## Program definition (name, linking, sources and headers)
 bin_PROGRAMS = astarithmetic
 
 ## Reason for linking with `libgnu' described in `bin/TEMPLATE/Makefile.am'.
-astarithmetic_LDADD = $(top_builddir)/bootstrapped/lib/libgnu.la -lgnuastro
+astarithmetic_LDADD = $(top_builddir)/bootstrapped/lib/libgnu.la \
+                      -lgnuastro $(MAYBE_NORPATH)
 
 astarithmetic_SOURCES = main.c ui.c arithmetic.c operands.c
 
diff --git a/bin/arithmetic/args.h b/bin/arithmetic/args.h
index 987148d..788a818 100644
--- a/bin/arithmetic/args.h
+++ b/bin/arithmetic/args.h
@@ -85,6 +85,19 @@ struct argp_option program_options[] =
       GAL_OPTIONS_NOT_MANDATORY,
       GAL_OPTIONS_NOT_SET
     },
+    {
+      "onedonstdout",
+      UI_KEY_ONEDONSTDOUT,
+      0,
+      0,
+      "Write 1D output on stdout, not in a table.",
+      GAL_OPTIONS_GROUP_OUTPUT,
+      &p->onedonstdout,
+      GAL_OPTIONS_NO_ARG_TYPE,
+      GAL_OPTIONS_RANGE_0_OR_1,
+      GAL_OPTIONS_NOT_MANDATORY,
+      GAL_OPTIONS_NOT_SET
+    },
 
     {0}
   };
diff --git a/bin/arithmetic/arithmetic.c b/bin/arithmetic/arithmetic.c
index 2a370c5..6935036 100644
--- a/bin/arithmetic/arithmetic.c
+++ b/bin/arithmetic/arithmetic.c
@@ -1247,7 +1247,8 @@ reversepolish(struct arithmeticparams *p)
          will be freed while freeing `data'. */
       data->wcs=p->refdata.wcs;
       if(data->ndim==1 && p->onedasimage==0)
-        gal_table_write(data, NULL, p->cp.tableformat, p->cp.output,
+        gal_table_write(data, NULL, p->cp.tableformat,
+                        p->onedonstdout ? NULL : p->cp.output,
                         "ARITHMETIC", 0);
       else
         gal_fits_img_write(data, p->cp.output, NULL, PROGRAM_NAME);
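A note on the hunk above: when `--onedonstdout' is set, a NULL file name is passed to gal_table_write; per the new option's description in the NEWS entry, that is what routes the one-dimensional result to standard output instead of a file.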
diff --git a/bin/arithmetic/main.h b/bin/arithmetic/main.h
index 92c9e9f..af1b9a0 100644
--- a/bin/arithmetic/main.h
+++ b/bin/arithmetic/main.h
@@ -81,6 +81,7 @@ struct arithmeticparams
   gal_data_t       refdata;  /* Container for information of the data.  */
   char          *globalhdu;  /* Single HDU for all inputs.              */
   uint8_t      onedasimage;  /* Write 1D outputs as an image not table. */
+  uint8_t     onedonstdout;  /* Write 1D outputs on stdout, not table.  */
   gal_data_t        *named;  /* List containing variables.              */
   size_t      tokencounter;  /* Counter for finding place in tokens.    */
 
diff --git a/bin/arithmetic/ui.c b/bin/arithmetic/ui.c
index f1e39e5..95f23aa 100644
--- a/bin/arithmetic/ui.c
+++ b/bin/arithmetic/ui.c
@@ -120,6 +120,7 @@ ui_initialize_options(struct arithmeticparams *p,
   struct gal_options_common_params *cp=&p->cp;
 
   /* Set the necessary common parameters structure. */
+  cp->program_struct     = p;
   cp->poptions           = program_options;
   cp->program_name       = PROGRAM_NAME;
   cp->program_exec       = PROGRAM_EXEC;
diff --git a/bin/arithmetic/ui.h b/bin/arithmetic/ui.h
index 37bf97d..6bbe04c 100644
--- a/bin/arithmetic/ui.h
+++ b/bin/arithmetic/ui.h
@@ -32,7 +32,7 @@ along with Gnuastro. If not, see <http://www.gnu.org/licenses/>.
 
 /* Available letters for short options:
 
-   a b c d e f i j k l m n p r s t u v x y z
+   a b c d e f i j k l m n p r t u v x y z
    A B C E G H J L Q R X Y
 */
 enum option_keys_enum
@@ -40,6 +40,7 @@ enum option_keys_enum
   /* With short-option version. */
   UI_KEY_GLOBALHDU       = 'g',
   UI_KEY_ONEDASIMAGE     = 'O',
+  UI_KEY_ONEDONSTDOUT    = 's',
   UI_KEY_WCSFILE         = 'w',
   UI_KEY_WCSHDU          = 'W',
 
diff --git a/bin/buildprog/Makefile.am b/bin/buildprog/Makefile.am
index fa4f1c5..e8ea17d 100644
--- a/bin/buildprog/Makefile.am
+++ b/bin/buildprog/Makefile.am
@@ -29,13 +29,17 @@ AM_CPPFLAGS = -I\$(top_srcdir)/bootstrapped/lib -I\$(top_srcdir)/lib \
               -DLIBDIR=\"$(libdir)\" -DINCLUDEDIR=\"$(includedir)\"      \
               -DEXEEXT=\"$(EXEEXT)\"
 
+if COND_NORPATH
+  MAYBE_NORPATH = $(CONFIG_LDADD)
+endif
 
 
 ## Program definition (name, linking, sources and headers)
 bin_PROGRAMS = astbuildprog
 
 ## Reason for linking with `libgnu' described in `bin/TEMPLATE/Makefile.am'.
-astbuildprog_LDADD = $(top_builddir)/bootstrapped/lib/libgnu.la -lgnuastro
+astbuildprog_LDADD = $(top_builddir)/bootstrapped/lib/libgnu.la -lgnuastro \
+                     $(MAYBE_NORPATH)
 
 # Basic program sources.
 astbuildprog_SOURCES = main.c ui.c buildprog.c
diff --git a/bin/buildprog/ui.c b/bin/buildprog/ui.c
index 9d51f20..6db0d66 100644
--- a/bin/buildprog/ui.c
+++ b/bin/buildprog/ui.c
@@ -104,6 +104,7 @@ ui_initialize_options(struct buildprogparams *p,
 
 
   /* Set the necessary common parameters structure. */
+  cp->program_struct     = p;
   cp->poptions           = program_options;
   cp->program_name       = PROGRAM_NAME;
   cp->program_exec       = PROGRAM_EXEC;
diff --git a/bin/convertt/Makefile.am b/bin/convertt/Makefile.am
index c5f6e76..1935ff1 100644
--- a/bin/convertt/Makefile.am
+++ b/bin/convertt/Makefile.am
@@ -23,13 +23,17 @@
 AM_LDFLAGS = -L\$(top_builddir)/lib
 AM_CPPFLAGS = -I\$(top_srcdir)/bootstrapped/lib -I\$(top_srcdir)/lib
 
+if COND_NORPATH
+  MAYBE_NORPATH = $(CONFIG_LDADD)
+endif
 
 
 ## Program definition (name, linking, sources and headers)
 bin_PROGRAMS = astconvertt
 
 ## Reason for linking with `libgnu' described in `bin/TEMPLATE/Makefile.am'.
-astconvertt_LDADD = $(top_builddir)/bootstrapped/lib/libgnu.la -lgnuastro
+astconvertt_LDADD = $(top_builddir)/bootstrapped/lib/libgnu.la -lgnuastro \
+                    $(MAYBE_NORPATH)
 
 astconvertt_SOURCES = main.c ui.c convertt.c color.c
 
diff --git a/bin/convertt/color.c b/bin/convertt/color.c
index bdd8daa..fc92a62 100644
--- a/bin/convertt/color.c
+++ b/bin/convertt/color.c
@@ -367,39 +367,40 @@ color_from_mono_sls(struct converttparams *p)
           case 156: *r=1.000000; *g=0.394800; *b=0.000000; break;
           case 157: *r=0.998342; *g=0.361900; *b=0.000000; break;
           case 158: *r=0.996683; *g=0.329000; *b=0.000000; break;
-          case 160: *r=0.995025; *g=0.296100; *b=0.000000; break;
-          case 161: *r=0.993367; *g=0.263200; *b=0.000000; break;
-          case 162: *r=0.991708; *g=0.230300; *b=0.000000; break;
-          case 163: *r=0.990050; *g=0.197400; *b=0.000000; break;
-          case 164: *r=0.988392; *g=0.164500; *b=0.000000; break;
-          case 165: *r=0.986733; *g=0.131600; *b=0.000000; break;
-          case 166: *r=0.985075; *g=0.098700; *b=0.000000; break;
-          case 167: *r=0.983417; *g=0.065800; *b=0.000000; break;
-          case 168: *r=0.981758; *g=0.032900; *b=0.000000; break;
-          case 169: *r=0.980100; *g=0.000000; *b=0.000000; break;
-          case 170: *r=0.955925; *g=0.000000; *b=0.000000; break;
-          case 171: *r=0.931750; *g=0.000000; *b=0.000000; break;
-          case 172: *r=0.907575; *g=0.000000; *b=0.000000; break;
-          case 173: *r=0.883400; *g=0.000000; *b=0.000000; break;
-          case 174: *r=0.859225; *g=0.000000; *b=0.000000; break;
-          case 175: *r=0.835050; *g=0.000000; *b=0.000000; break;
-          case 176: *r=0.810875; *g=0.000000; *b=0.000000; break;
-          case 177: *r=0.786700; *g=0.000000; *b=0.000000; break;
-          case 178: *r=0.762525; *g=0.000000; *b=0.000000; break;
-          case 179: *r=0.738350; *g=0.000000; *b=0.000000; break;
-          case 180: *r=0.714175; *g=0.000000; *b=0.000000; break;
-          case 181: *r=0.690000; *g=0.000000; *b=0.000000; break;
-          case 182: *r=0.715833; *g=0.083333; *b=0.083333; break;
-          case 183: *r=0.741667; *g=0.166667; *b=0.166667; break;
-          case 184: *r=0.767500; *g=0.250000; *b=0.250000; break;
-          case 185: *r=0.793333; *g=0.333333; *b=0.333333; break;
-          case 186: *r=0.819167; *g=0.416667; *b=0.416667; break;
-          case 187: *r=0.845000; *g=0.500000; *b=0.500000; break;
-          case 188: *r=0.870833; *g=0.583333; *b=0.583333; break;
-          case 189: *r=0.896667; *g=0.666667; *b=0.666667; break;
-          case 190: *r=0.922500; *g=0.750000; *b=0.750000; break;
-          case 191: *r=0.948333; *g=0.833333; *b=0.833333; break;
-          case 192: *r=0.974167; *g=0.916667; *b=0.916667; break;
+          case 159: *r=0.995025; *g=0.296100; *b=0.000000; break;
+          case 160: *r=0.993367; *g=0.263200; *b=0.000000; break;
+          case 161: *r=0.991708; *g=0.230300; *b=0.000000; break;
+          case 162: *r=0.990050; *g=0.197400; *b=0.000000; break;
+          case 163: *r=0.988392; *g=0.164500; *b=0.000000; break;
+          case 164: *r=0.986733; *g=0.131600; *b=0.000000; break;
+          case 165: *r=0.985075; *g=0.098700; *b=0.000000; break;
+          case 166: *r=0.983417; *g=0.065800; *b=0.000000; break;
+          case 167: *r=0.981758; *g=0.032900; *b=0.000000; break;
+          case 168: *r=0.980100; *g=0.000000; *b=0.000000; break;
+          case 169: *r=0.955925; *g=0.000000; *b=0.000000; break;
+          case 170: *r=0.931750; *g=0.000000; *b=0.000000; break;
+          case 171: *r=0.907575; *g=0.000000; *b=0.000000; break;
+          case 172: *r=0.883400; *g=0.000000; *b=0.000000; break;
+          case 173: *r=0.859225; *g=0.000000; *b=0.000000; break;
+          case 174: *r=0.835050; *g=0.000000; *b=0.000000; break;
+          case 175: *r=0.810875; *g=0.000000; *b=0.000000; break;
+          case 176: *r=0.786700; *g=0.000000; *b=0.000000; break;
+          case 177: *r=0.762525; *g=0.000000; *b=0.000000; break;
+          case 178: *r=0.738350; *g=0.000000; *b=0.000000; break;
+          case 179: *r=0.714175; *g=0.000000; *b=0.000000; break;
+          case 180: *r=0.690000; *g=0.000000; *b=0.000000; break;
+          case 181: *r=0.715833; *g=0.083333; *b=0.083333; break;
+          case 182: *r=0.741667; *g=0.166667; *b=0.166667; break;
+          case 183: *r=0.767500; *g=0.250000; *b=0.250000; break;
+          case 184: *r=0.793333; *g=0.333333; *b=0.333333; break;
+          case 185: *r=0.819167; *g=0.416667; *b=0.416667; break;
+          case 186: *r=0.845000; *g=0.500000; *b=0.500000; break;
+          case 187: *r=0.870833; *g=0.583333; *b=0.583333; break;
+          case 188: *r=0.896667; *g=0.666667; *b=0.666667; break;
+          case 189: *r=0.922500; *g=0.750000; *b=0.750000; break;
+          case 190: *r=0.948333; *g=0.833333; *b=0.833333; break;
+          case 191: *r=0.974167; *g=0.916667; *b=0.916667; break;
+          case 192: *r=1.000000; *g=1.000000; *b=1.000000; break;
           case 193: *r=1.000000; *g=1.000000; *b=1.000000; break;
           case 194: *r=1.000000; *g=1.000000; *b=1.000000; break;
           case 195: *r=1.000000; *g=1.000000; *b=1.000000; break;
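The fix above shifts every entry down by one case so that value 159 (previously skipped, and apparently left at the zero-initialized black of bug #56747) gets a color, and appends the pure-white entry at case 192. As a sanity check, the shifted values follow simple linear ramps; a minimal standalone sketch, with step sizes inferred from the table entries themselves:

    #include <stdio.h>

    int
    main(void)
    {
      int i;

      /* Green ramps down to zero at case 168 (pure red). */
      for(i=159; i<=168; ++i)
        printf("case %d: g=%.6f\n", i, 0.032900*(168-i));

      /* Red then dims from 0.980100 to 0.690000 at case 180. */
      for(i=169; i<=180; ++i)
        printf("case %d: r=%.6f\n", i, 0.980100-0.024175*(i-168));

      return 0;
    }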
diff --git a/bin/convertt/ui.c b/bin/convertt/ui.c
index 56f405f..616314f 100644
--- a/bin/convertt/ui.c
+++ b/bin/convertt/ui.c
@@ -110,6 +110,7 @@ ui_initialize_options(struct converttparams *p,
 
 
   /* Set the necessary common parameters structure. */
+  cp->program_struct     = p;
   cp->poptions           = program_options;
   cp->program_name       = PROGRAM_NAME;
   cp->program_exec       = PROGRAM_EXEC;
diff --git a/bin/convolve/Makefile.am b/bin/convolve/Makefile.am
index 72aeadd..a5c42a3 100644
--- a/bin/convolve/Makefile.am
+++ b/bin/convolve/Makefile.am
@@ -23,13 +23,17 @@
 AM_LDFLAGS = -L\$(top_builddir)/lib
 AM_CPPFLAGS = -I\$(top_srcdir)/bootstrapped/lib -I\$(top_srcdir)/lib
 
+if COND_NORPATH
+  MAYBE_NORPATH = $(CONFIG_LDADD)
+endif
 
 
 ## Program definition (name, linking, sources and headers)
 bin_PROGRAMS = astconvolve
 
 ## Reason for linking with `libgnu' described in `bin/TEMPLATE/Makefile.am'.
-astconvolve_LDADD = $(top_builddir)/bootstrapped/lib/libgnu.la -lgnuastro
+astconvolve_LDADD = $(top_builddir)/bootstrapped/lib/libgnu.la -lgnuastro \
+                    $(MAYBE_NORPATH)
 
 astconvolve_SOURCES = main.c ui.c convolve.c
 
diff --git a/bin/cosmiccal/Makefile.am b/bin/cosmiccal/Makefile.am
index 96cafdd..984a93a 100644
--- a/bin/cosmiccal/Makefile.am
+++ b/bin/cosmiccal/Makefile.am
@@ -23,13 +23,17 @@
 AM_LDFLAGS = -L\$(top_builddir)/lib
 AM_CPPFLAGS = -I\$(top_srcdir)/bootstrapped/lib -I\$(top_srcdir)/lib
 
+if COND_NORPATH
+  MAYBE_NORPATH = $(CONFIG_LDADD)
+endif
 
 
 ## Program definition (name, linking, sources and headers)
 bin_PROGRAMS = astcosmiccal
 
 ## Reason for linking with `libgnu' described in `bin/TEMPLATE/Makefile.am'.
-astcosmiccal_LDADD = $(top_builddir)/bootstrapped/lib/libgnu.la -lgnuastro
+astcosmiccal_LDADD = $(top_builddir)/bootstrapped/lib/libgnu.la -lgnuastro \
+                     $(MAYBE_NORPATH)
 
 astcosmiccal_SOURCES = main.c ui.c cosmiccal.c
 
diff --git a/bin/cosmiccal/args.h b/bin/cosmiccal/args.h
index d241ca0..1876746 100644
--- a/bin/cosmiccal/args.h
+++ b/bin/cosmiccal/args.h
@@ -301,6 +301,21 @@ struct argp_option program_options[] =
       GAL_OPTIONS_NOT_SET,
       ui_add_to_single_value,
     },
+    {
+      "lineatz",
+      UI_KEY_LINEATZ,
+      "STR/FLT",
+      0,
+      "Wavelength of given line at chosen redshift",
+      UI_GROUP_SPECIFIC,
+      &p->specific,
+      GAL_TYPE_STRING,
+      GAL_OPTIONS_RANGE_ANY,
+      GAL_OPTIONS_NOT_MANDATORY,
+      GAL_OPTIONS_NOT_SET,
+      ui_add_to_single_value,
+    },
+
 
 
     {0}
diff --git a/bin/cosmiccal/cosmiccal.c b/bin/cosmiccal/cosmiccal.c
index 9d52a29..a7d665f 100644
--- a/bin/cosmiccal/cosmiccal.c
+++ b/bin/cosmiccal/cosmiccal.c
@@ -255,6 +255,10 @@ cosmiccal(struct cosmiccalparams *p)
                                                         p->oradiation));
             break;
 
+          case UI_KEY_LINEATZ:
+            printf("%g ", gal_list_f64_pop(&p->specific_arg)*(1+p->redshift));
+            break;
+
           default:
             error(EXIT_FAILURE, 0, "%s: a bug! Please contact us at %s to "
                   "fix the problem. The code %d is not recognized as a "
diff --git a/bin/cosmiccal/main.h b/bin/cosmiccal/main.h
index 958a8db..0801aa6 100644
--- a/bin/cosmiccal/main.h
+++ b/bin/cosmiccal/main.h
@@ -55,6 +55,7 @@ struct cosmiccalparams
 
   /* Outputs. */
   gal_list_i32_t     *specific; /* Codes for single row calculations.   */
+  gal_list_f64_t *specific_arg; /* Possible arguments for single calcs. */
 
   /* Internal: */
   time_t               rawtime; /* Starting time of the program.        */
diff --git a/bin/cosmiccal/ui.c b/bin/cosmiccal/ui.c
index 8e31895..0340ae7 100644
--- a/bin/cosmiccal/ui.c
+++ b/bin/cosmiccal/ui.c
@@ -106,6 +106,7 @@ ui_initialize_options(struct cosmiccalparams *p,
   struct gal_options_common_params *cp=&p->cp;
 
   /* Set the necessary common parameters structure. */
+  cp->program_struct     = p;
   cp->poptions           = program_options;
   cp->program_name       = PROGRAM_NAME;
   cp->program_exec       = PROGRAM_EXEC;
@@ -199,6 +200,10 @@ static void *
 ui_add_to_single_value(struct argp_option *option, char *arg,
                       char *filename, size_t lineno, void *params)
 {
+  int linecode;
+  double *dptr, val=NAN;
+  struct cosmiccalparams *p = (struct cosmiccalparams *)params;
+
   /* In case of printing the option values. */
   if(lineno==-1)
     error(EXIT_FAILURE, 0, "currently the options to be printed in one row "
@@ -211,18 +216,43 @@ ui_add_to_single_value(struct argp_option *option, char *arg,
 
   /* If this option is given in a configuration file, then `arg' will not
      be NULL and we don't want to do anything if it is `0'. */
-  if(arg)
+  switch(option->key)
     {
-      /* Make sure the value is only `0' or `1'. */
-      if( arg[1]!='\0' && *arg!='0' && *arg!='1' )
-        error_at_line(EXIT_FAILURE, 0, filename, lineno, "the `--%s' "
-                      "option takes no arguments. In a configuration "
-                      "file it can only have the values `1' or `0', "
-                      "indicating if it should be used or not",
-                      option->name);
-
-      /* Only proceed if the (possibly given) argument is 1. */
-      if(arg[0]=='0' && arg[1]=='\0') return NULL;
+    /* Options with arguments. */
+    case UI_KEY_LINEATZ:
+      /* Make sure an argument is given. */
+      if(arg==NULL)
+        error(EXIT_FAILURE, 0, "option `--lineatz' needs an argument");
+
+      /* If the argument is a number, read it; if not, see if it's a known
+         spectral line name. */
+      dptr=&val;
+      if( gal_type_from_string((void **)(&dptr), arg, GAL_TYPE_FLOAT64) )
+        {
+          linecode=gal_speclines_line_code(arg);
+          if(linecode==GAL_SPECLINES_INVALID)
+            error(EXIT_FAILURE, 0, "`%s' not a known spectral line name",
+                  arg);
+          val=gal_speclines_line_angstrom(linecode);
+        }
+      gal_list_f64_add(&p->specific_arg, val);
+      break;
+
+    /* Options without arguments. */
+    default:
+      if(arg)
+        {
+          /* Make sure the value is only `0' or `1'. */
+          if( arg[1]!='\0' && *arg!='0' && *arg!='1' )
+            error_at_line(EXIT_FAILURE, 0, filename, lineno, "the `--%s' "
+                          "option takes no arguments. In a configuration "
+                          "file it can only have the values `1' or `0', "
+                          "indicating if it should be used or not",
+                          option->name);
+
+          /* Only proceed if the (possibly given) argument is 1. */
+          if(arg[0]=='0' && arg[1]=='\0') return NULL;
+        }
     }
 
   /* Add this option to the print list and return. */
@@ -279,10 +309,10 @@ ui_parse_obsline(struct argp_option *option, char *arg,
       obsline=gal_options_parse_list_of_numbers(arg, filename, lineno);
 
       /* Only one number must be given as second argument. */
-      if(obsline->size!=1)
-        error(EXIT_FAILURE, 0, "too many values (%zu) given to `--obsline'. "
-              "Only two values (line name/wavelengh, and observed wavelengh) "
-              "must be given", obsline->size+1);
+      if(obsline==NULL || obsline->size!=1)
+        error(EXIT_FAILURE, 0, "Wrong format given to `--obsline'. Only "
+              "two values (line name/wavelengh, and observed wavelengh) "
+              "must be given to it");
 
       /* If a wavelength is given directly as a number (not a name), then
          put that number in a second element of the array. */
@@ -408,6 +438,7 @@ ui_preparations(struct cosmiccalparams *p)
      control reaches here, the list is finalized. So we should just reverse
      it so the user gets values in the same order they requested them. */
   gal_list_i32_reverse(&p->specific);
+  gal_list_f64_reverse(&p->specific_arg);
 }
 
 
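For reference, the number-or-name fallback implemented for `--lineatz' above can be isolated as follows. This is only a sketch: the header names follow Gnuastro's installed-header convention, and the abort on an invalid name (which the real code does with error()) is reduced to a sentinel value:

    #include <gnuastro/type.h>
    #include <gnuastro/speclines.h>

    /* Turn a `--lineatz' argument (a number, or a known line name)
       into a rest-frame wavelength in Angstroms. */
    static double
    lineatz_value(char *arg)
    {
      int linecode;
      double val=0.0, *dptr=&val;

      /* `gal_type_from_string' returns non-zero when `arg' could NOT
         be parsed as a number; in that case try it as a line name. */
      if( gal_type_from_string((void **)(&dptr), arg, GAL_TYPE_FLOAT64) )
        {
          linecode=gal_speclines_line_code(arg);
          val = ( linecode==GAL_SPECLINES_INVALID
                  ? -1.0                      /* Caller should abort. */
                  : gal_speclines_line_angstrom(linecode) );
        }
      return val;
    }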
diff --git a/bin/cosmiccal/ui.h b/bin/cosmiccal/ui.h
index dcc57f9..0b36e52 100644
--- a/bin/cosmiccal/ui.h
+++ b/bin/cosmiccal/ui.h
@@ -42,7 +42,7 @@ enum program_args_groups
 
 /* Available letters for short options:
 
-   f i j k n p t w x y
+   f j k n p t w x y
    B E J Q R W X Y
 */
 enum option_keys_enum
@@ -68,6 +68,7 @@ enum option_keys_enum
   UI_KEY_LOOKBACKTIME        = 'b',
   UI_KEY_CRITICALDENSITY     = 'c',
   UI_KEY_VOLUME              = 'v',
+  UI_KEY_LINEATZ             = 'i',
 
   /* Only with long version (start with a value 1000, the rest will be set
      automatically). */
diff --git a/bin/crop/Makefile.am b/bin/crop/Makefile.am
index e6a86a2..b644d67 100644
--- a/bin/crop/Makefile.am
+++ b/bin/crop/Makefile.am
@@ -23,13 +23,17 @@
 AM_LDFLAGS = -L\$(top_builddir)/lib
 AM_CPPFLAGS = -I\$(top_srcdir)/bootstrapped/lib -I\$(top_srcdir)/lib
 
+if COND_NORPATH
+  MAYBE_NORPATH = $(CONFIG_LDADD)
+endif
 
 
 ## Program definition (name, linking, sources and headers)
 bin_PROGRAMS = astcrop
 
 ## Reason for linking with `libgnu' described in `bin/TEMPLATE/Makefile.am'.
-astcrop_LDADD = $(top_builddir)/bootstrapped/lib/libgnu.la -lgnuastro
+astcrop_LDADD = $(top_builddir)/bootstrapped/lib/libgnu.la -lgnuastro \
+                $(MAYBE_NORPATH)
 
 astcrop_SOURCES = main.c ui.c crop.c wcsmode.c onecrop.c
 
diff --git a/bin/crop/ui.c b/bin/crop/ui.c
index 939826b..e2dd9de 100644
--- a/bin/crop/ui.c
+++ b/bin/crop/ui.c
@@ -112,6 +112,7 @@ ui_initialize_options(struct cropparams *p,
 
 
   /* Set the necessary common parameters structure. */
+  cp->program_struct     = p;
   cp->program_name       = PROGRAM_NAME;
   cp->program_exec       = PROGRAM_EXEC;
   cp->program_bibtex     = PROGRAM_BIBTEX;
@@ -937,6 +938,7 @@ ui_preparations(struct cropparams *p)
       if(p->mode==IMGCROP_MODE_WCS) wcsmode_check_prepare(p, img);
     }
 
+
   /* Polygon cropping is currently only supported on 2D */
   if(p->imgs->ndim!=2 && p->polygon)
     error(EXIT_FAILURE, 0, "%s: polygon cropping is currently only "
diff --git a/bin/fits/Makefile.am b/bin/fits/Makefile.am
index 8d7d0e8..a112c75 100644
--- a/bin/fits/Makefile.am
+++ b/bin/fits/Makefile.am
@@ -23,13 +23,17 @@
 AM_LDFLAGS = -L\$(top_builddir)/lib
 AM_CPPFLAGS = -I\$(top_srcdir)/bootstrapped/lib -I\$(top_srcdir)/lib
 
+if COND_NORPATH
+  MAYBE_NORPATH = $(CONFIG_LDADD)
+endif
 
 
 ## Program definition (name, linking, sources and headers)
 bin_PROGRAMS = astfits
 
 ## Reason for linking with `libgnu' described in `bin/TEMPLATE/Makefile.am'.
-astfits_LDADD = $(top_builddir)/bootstrapped/lib/libgnu.la -lgnuastro
+astfits_LDADD = $(top_builddir)/bootstrapped/lib/libgnu.la -lgnuastro \
+                $(MAYBE_NORPATH)
 
 astfits_SOURCES = main.c ui.c extension.c fits.c keywords.c
 
diff --git a/bin/fits/ui.c b/bin/fits/ui.c
index 1d69fd4..ca86eee 100644
--- a/bin/fits/ui.c
+++ b/bin/fits/ui.c
@@ -99,6 +99,7 @@ ui_initialize_options(struct fitsparams *p,
 
   /* Set the necessary common parameters structure. */
   cp->keep               = 1;
+  cp->program_struct     = p;
   cp->poptions           = program_options;
   cp->program_name       = PROGRAM_NAME;
   cp->program_exec       = PROGRAM_EXEC;
diff --git a/bin/gnuastro.conf b/bin/gnuastro.conf
index 1ed1845..115b78e 100644
--- a/bin/gnuastro.conf
+++ b/bin/gnuastro.conf
@@ -39,4 +39,4 @@
 
 # Operating mode
  quietmmap        0
- minmapsize       2000000000
\ No newline at end of file
+ minmapsize       2000000000
diff --git a/bin/match/Makefile.am b/bin/match/Makefile.am
index c11c58d..9e9fe33 100644
--- a/bin/match/Makefile.am
+++ b/bin/match/Makefile.am
@@ -23,13 +23,17 @@
 AM_LDFLAGS = -L\$(top_builddir)/lib
 AM_CPPFLAGS = -I\$(top_srcdir)/bootstrapped/lib -I\$(top_srcdir)/lib
 
+if COND_NORPATH
+  MAYBE_NORPATH = $(CONFIG_LDADD)
+endif
 
 
 ## Program definition (name, linking, sources and headers)
 bin_PROGRAMS = astmatch
 
 ## Reason for linking with `libgnu' described in `bin/TEMPLATE/Makefile.am'.
-astmatch_LDADD = $(top_builddir)/bootstrapped/lib/libgnu.la -lgnuastro
+astmatch_LDADD = $(top_builddir)/bootstrapped/lib/libgnu.la -lgnuastro \
+                 $(MAYBE_NORPATH)
 
 astmatch_SOURCES = main.c ui.c match.c
 
diff --git a/bin/match/ui.c b/bin/match/ui.c
index d188f2c..dacb0d3 100644
--- a/bin/match/ui.c
+++ b/bin/match/ui.c
@@ -100,6 +100,7 @@ ui_initialize_options(struct matchparams *p,
 
 
   /* Set the necessary common parameters structure. */
+  cp->program_struct     = p;
   cp->poptions           = program_options;
   cp->program_name       = PROGRAM_NAME;
   cp->program_exec       = PROGRAM_EXEC;
diff --git a/bin/mkcatalog/Makefile.am b/bin/mkcatalog/Makefile.am
index b834dff..19b30f3 100644
--- a/bin/mkcatalog/Makefile.am
+++ b/bin/mkcatalog/Makefile.am
@@ -23,13 +23,17 @@
 AM_LDFLAGS = -L\$(top_builddir)/lib
 AM_CPPFLAGS = -I\$(top_srcdir)/bootstrapped/lib -I\$(top_srcdir)/lib
 
+if COND_NORPATH
+  MAYBE_NORPATH = $(CONFIG_LDADD)
+endif
 
 
 ## Program definition (name, linking, sources and headers)
 bin_PROGRAMS = astmkcatalog
 
 ## Reason for linking with `libgnu' described in `bin/TEMPLATE/Makefile.am'.
-astmkcatalog_LDADD = $(top_builddir)/bootstrapped/lib/libgnu.la -lgnuastro
+astmkcatalog_LDADD = $(top_builddir)/bootstrapped/lib/libgnu.la -lgnuastro \
+                     $(MAYBE_NORPATH)
 
 astmkcatalog_SOURCES = main.c ui.c mkcatalog.c columns.c upperlimit.c parse.c
 
diff --git a/bin/mknoise/Makefile.am b/bin/mknoise/Makefile.am
index 6f09968..555f7fd 100644
--- a/bin/mknoise/Makefile.am
+++ b/bin/mknoise/Makefile.am
@@ -23,13 +23,17 @@
 AM_LDFLAGS = -L\$(top_builddir)/lib
 AM_CPPFLAGS = -I\$(top_srcdir)/bootstrapped/lib -I\$(top_srcdir)/lib
 
+if COND_NORPATH
+  MAYBE_NORPATH = $(CONFIG_LDADD)
+endif
 
 
 ## Program definition (name, linking, sources and headers)
 bin_PROGRAMS = astmknoise
 
 ## Reason for linking with `libgnu' described in `bin/TEMPLATE/Makefile.am'.
-astmknoise_LDADD = $(top_builddir)/bootstrapped/lib/libgnu.la -lgnuastro
+astmknoise_LDADD = $(top_builddir)/bootstrapped/lib/libgnu.la -lgnuastro \
+                   $(MAYBE_NORPATH)
 
 astmknoise_SOURCES = main.c ui.c mknoise.c
 
diff --git a/bin/mknoise/ui.c b/bin/mknoise/ui.c
index c30cd7b..b1f775d 100644
--- a/bin/mknoise/ui.c
+++ b/bin/mknoise/ui.c
@@ -104,6 +104,7 @@ ui_initialize_options(struct mknoiseparams *p,
 
 
   /* Set the necessary common parameters structure. */
+  cp->program_struct     = p;
   cp->program_name       = PROGRAM_NAME;
   cp->program_exec       = PROGRAM_EXEC;
   cp->program_bibtex     = PROGRAM_BIBTEX;
@@ -437,6 +438,7 @@ ui_free_report(struct mknoiseparams *p, struct timeval *t1)
   /* Free the allocated arrays: */
   free(p->cp.hdu);
   free(p->cp.output);
+  gsl_rng_free(p->rng);
   gal_data_free(p->input);
 
   /* Print the final message. */
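The added `gsl_rng_free' above presumably plugs a small memory leak by completing the standard GSL random-number-generator lifecycle. A self-contained sketch of that pairing (independent of MakeNoise's actual setup code, which is outside this diff):

    #include <gsl/gsl_rng.h>

    int
    main(void)
    {
      gsl_rng *rng;

      gsl_rng_env_setup();                 /* Honor GSL_RNG_TYPE/GSL_RNG_SEED. */
      rng=gsl_rng_alloc(gsl_rng_default);  /* Allocation must be paired...     */
      gsl_rng_free(rng);                   /* ...with a free, as added above.  */
      return 0;
    }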
diff --git a/bin/mkprof/Makefile.am b/bin/mkprof/Makefile.am
index cfaa9ea..884a270 100644
--- a/bin/mkprof/Makefile.am
+++ b/bin/mkprof/Makefile.am
@@ -23,13 +23,17 @@
 AM_LDFLAGS = -L\$(top_builddir)/lib
 AM_CPPFLAGS = -I\$(top_srcdir)/bootstrapped/lib -I\$(top_srcdir)/lib
 
+if COND_NORPATH
+  MAYBE_NORPATH = $(CONFIG_LDADD)
+endif
 
 
 ## Program definition (name, linking, sources and headers)
 bin_PROGRAMS = astmkprof
 
 ## Reason for linking with `libgnu' described in `bin/TEMPLATE/Makefile.am'.
-astmkprof_LDADD = $(top_builddir)/bootstrapped/lib/libgnu.la -lgnuastro
+astmkprof_LDADD = $(top_builddir)/bootstrapped/lib/libgnu.la -lgnuastro \
+                  $(MAYBE_NORPATH)
 
 astmkprof_SOURCES = main.c ui.c mkprof.c oneprofile.c profiles.c
 
diff --git a/bin/mkprof/ui.c b/bin/mkprof/ui.c
index f7b4fc0..ce000ab 100644
--- a/bin/mkprof/ui.c
+++ b/bin/mkprof/ui.c
@@ -178,6 +178,7 @@ ui_initialize_options(struct mkprofparams *p,
   struct gal_options_common_params *cp=&p->cp;
 
   /* Set the necessary common parameters structure. */
+  cp->program_struct     = p;
   cp->program_name       = PROGRAM_NAME;
   cp->program_exec       = PROGRAM_EXEC;
   cp->program_bibtex     = PROGRAM_BIBTEX;
diff --git a/bin/noisechisel/Makefile.am b/bin/noisechisel/Makefile.am
index 873a112..86633b1 100644
--- a/bin/noisechisel/Makefile.am
+++ b/bin/noisechisel/Makefile.am
@@ -23,13 +23,17 @@
 AM_LDFLAGS = -L\$(top_builddir)/lib
 AM_CPPFLAGS = -I\$(top_srcdir)/bootstrapped/lib -I\$(top_srcdir)/lib
 
+if COND_NORPATH
+  MAYBE_NORPATH = $(CONFIG_LDADD)
+endif
 
 
 ## Program definition (name, linking, sources and headers)
 bin_PROGRAMS = astnoisechisel
 
 ## Reason for linking with `libgnu' described in `bin/TEMPLATE/Makefile.am'.
-astnoisechisel_LDADD = $(top_builddir)/bootstrapped/lib/libgnu.la -lgnuastro
+astnoisechisel_LDADD = $(top_builddir)/bootstrapped/lib/libgnu.la \
+                       -lgnuastro $(MAYBE_NORPATH)
 
 astnoisechisel_SOURCES = main.c ui.c detection.c noisechisel.c sky.c     \
   threshold.c
diff --git a/bin/noisechisel/detection.c b/bin/noisechisel/detection.c
index e8895ce..6baa8c4 100644
--- a/bin/noisechisel/detection.c
+++ b/bin/noisechisel/detection.c
@@ -991,9 +991,11 @@ detection_remove_false_initial(struct noisechiselparams *p,
       e_th=p->exp_thresh_full->array;
       do                                    /* Growth is necessary later.  */
         {                                   /* So there is no need to set  */
-          if(*l!=GAL_BLANK_INT32)           /* the labels image, but we    */
-            {                               /* have to count the number of */
-              *b = newlabels[ *l ] > 0;     /* pixels to (possibly) grow.  */
+          if(*l==GAL_BLANK_INT32)           /* the labels image, but we    */
+            *b=GAL_BLANK_UINT8;             /* have to count the number of */
+          else                              /* pixels to (possibly) grow.  */
+            {
+              *b = newlabels[ *l ] > 0;
               if( *b==0 && *arr>*e_th )
                 ++p->numexpand;
             }
@@ -1001,11 +1003,11 @@ detection_remove_false_initial(struct noisechiselparams *p,
         }
       while(++l<lf);
 
+
       /* If there aren't any pixels to later expand, then reset the labels
          (remove false detections in the labeled image). */
       if(p->numexpand==0)
         {
-          b=workbin->array;
           l=p->olabel->array;
           do if(*l!=GAL_BLANK_INT32) *l = newlabels[ *l ]; while(++l<lf);
         }
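This hunk makes the per-pixel decision explicit where blank labels previously left the binary image untouched; it plausibly corresponds to bug #56710 in the NEWS above (blank values missing from NoiseChisel's output). The rule, as a minimal sketch using the same blank macros:

    #include <stdint.h>
    #include <gnuastro/blank.h>   /* GAL_BLANK_INT32, GAL_BLANK_UINT8. */

    /* Blank labels stay blank in the binary image; other pixels flag
       membership in a label that survived the false-detection test. */
    static uint8_t
    binary_from_label(int32_t label, const int32_t *newlabels)
    {
      if(label==GAL_BLANK_INT32) return GAL_BLANK_UINT8;
      return newlabels[label] > 0;
    }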
diff --git a/bin/noisechisel/ui.c b/bin/noisechisel/ui.c
index 017daf3..e930a0e 100644
--- a/bin/noisechisel/ui.c
+++ b/bin/noisechisel/ui.c
@@ -107,6 +107,7 @@ ui_initialize_options(struct noisechiselparams *p,
 
 
   /* Set the necessary common parameters structure. */
+  cp->program_struct     = p;
   cp->poptions           = program_options;
   cp->program_name       = PROGRAM_NAME;
   cp->program_exec       = PROGRAM_EXEC;
diff --git a/bin/segment/Makefile.am b/bin/segment/Makefile.am
index ea56135..56bb770 100644
--- a/bin/segment/Makefile.am
+++ b/bin/segment/Makefile.am
@@ -23,13 +23,17 @@
 AM_LDFLAGS = -L\$(top_builddir)/lib
 AM_CPPFLAGS = -I\$(top_srcdir)/bootstrapped/lib -I\$(top_srcdir)/lib
 
+if COND_NORPATH
+  MAYBE_NORPATH = $(CONFIG_LDADD)
+endif
 
 
 ## Program definition (name, linking, sources and headers)
 bin_PROGRAMS = astsegment
 
 ## Reason for linking with `libgnu' described in `bin/TEMPLATE/Makefile.am'.
-astsegment_LDADD = $(top_builddir)/bootstrapped/lib/libgnu.la -lgnuastro
+astsegment_LDADD = $(top_builddir)/bootstrapped/lib/libgnu.la -lgnuastro \
+                   $(MAYBE_NORPATH)
 
 astsegment_SOURCES = main.c ui.c segment.c clumps.c
 
diff --git a/bin/segment/ui.c b/bin/segment/ui.c
index 55d591d..77b1a44 100644
--- a/bin/segment/ui.c
+++ b/bin/segment/ui.c
@@ -107,6 +107,7 @@ ui_initialize_options(struct segmentparams *p,
 
 
   /* Set the necessary common parameters structure. */
+  cp->program_struct     = p;
   cp->poptions           = program_options;
   cp->program_name       = PROGRAM_NAME;
   cp->program_exec       = PROGRAM_EXEC;
@@ -405,7 +406,7 @@ static void
 ui_prepare_inputs(struct segmentparams *p)
 {
   int32_t *i, *ii;
-  gal_data_t *maxd, *ccin, *ccout=NULL;
+  gal_data_t *maxd, *ccin, *blankflag, *ccout=NULL;
 
   /* Read the input as a single precision floating point dataset. */
   p->input = gal_array_read_one_ch_to_type(p->inputname, p->cp.hdu,
@@ -483,6 +484,16 @@ ui_prepare_inputs(struct segmentparams *p)
               p->dhdu, gal_type_name(p->olabel->type, 1),
               p->useddetectionname, p->dhdu);
 
+      /* If the input has blank values, set them to blank values in the
+         labeled image too. It doesn't matter if the labeled image has
+         blank pixels that aren't blank on the input image. */
+      if(gal_blank_present(p->input, 1))
+        {
+          blankflag=gal_blank_flag(p->input);
+          gal_blank_flag_apply(p->olabel, blankflag);
+          gal_data_free(blankflag);
+        }
+
       /* Get the maximum value of the input (total number of labels if they
          are separate). If the maximum is 1 (the image is a binary image),
          then apply the connected components algorithm to separate the
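The blank-copying step added above only uses library calls that appear in the hunk itself; isolated into a minimal sketch:

    #include <gnuastro/data.h>
    #include <gnuastro/blank.h>

    /* Wherever `input' is blank, make `labels' blank too (the reverse
       direction intentionally doesn't matter, per the comment above). */
    static void
    copy_blanks(gal_data_t *input, gal_data_t *labels)
    {
      gal_data_t *flag;

      if( gal_blank_present(input, 1) )     /* 1: update the blank flag. */
        {
          flag=gal_blank_flag(input);       /* Binary flag image.        */
          gal_blank_flag_apply(labels, flag);
          gal_data_free(flag);
        }
    }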
diff --git a/bin/statistics/Makefile.am b/bin/statistics/Makefile.am
index 972cc54..4106976 100644
--- a/bin/statistics/Makefile.am
+++ b/bin/statistics/Makefile.am
@@ -23,13 +23,17 @@
 AM_LDFLAGS = -L\$(top_builddir)/lib
 AM_CPPFLAGS = -I\$(top_srcdir)/bootstrapped/lib -I\$(top_srcdir)/lib
 
+if COND_NORPATH
+  MAYBE_NORPATH = $(CONFIG_LDADD)
+endif
 
 
 ## Program definition (name, linking, sources and headers)
 bin_PROGRAMS = aststatistics
 
 ## Reason for linking with `libgnu' described in `bin/TEMPLATE/Makefile.am'.
-aststatistics_LDADD = $(top_builddir)/bootstrapped/lib/libgnu.la -lgnuastro
+aststatistics_LDADD = $(top_builddir)/bootstrapped/lib/libgnu.la \
+                      -lgnuastro $(MAYBE_NORPATH)
 
 aststatistics_SOURCES = main.c ui.c sky.c statistics.c
 
diff --git a/bin/table/Makefile.am b/bin/table/Makefile.am
index 244830a..8b4dcdb 100644
--- a/bin/table/Makefile.am
+++ b/bin/table/Makefile.am
@@ -23,13 +23,17 @@
 AM_LDFLAGS = -L\$(top_builddir)/lib
 AM_CPPFLAGS = -I\$(top_srcdir)/bootstrapped/lib -I\$(top_srcdir)/lib
 
+if COND_NORPATH
+  MAYBE_NORPATH = $(CONFIG_LDADD)
+endif
 
 
 ## Program definition (name, linking, sources and headers)
 bin_PROGRAMS = asttable
 
 ## Reason for linking with `libgnu' described in `bin/TEMPLATE/Makefile.am'.
-asttable_LDADD = $(top_builddir)/bootstrapped/lib/libgnu.la -lgnuastro
+asttable_LDADD = $(top_builddir)/bootstrapped/lib/libgnu.la -lgnuastro \
+                 $(MAYBE_NORPATH)
 
 asttable_SOURCES = main.c ui.c arithmetic.c table.c
 
diff --git a/bin/table/args.h b/bin/table/args.h
index 144616c..2f2506e 100644
--- a/bin/table/args.h
+++ b/bin/table/args.h
@@ -102,13 +102,24 @@ struct argp_option program_options[] =
       GAL_OPTIONS_NOT_MANDATORY,
       GAL_OPTIONS_NOT_SET
     },
+
+
+
+
+
+    /* Output Rows */
+    {
+      0, 0, 0, 0,
+      "Rows in output:",
+      UI_GROUP_OUTROWS
+    },
     {
       "range",
       UI_KEY_RANGE,
       "STR,FLT:FLT",
       0,
       "Column, and range to limit output.",
-      GAL_OPTIONS_GROUP_OUTPUT,
+      UI_GROUP_OUTROWS,
       &p->range,
       GAL_TYPE_STRING,
       GAL_OPTIONS_RANGE_ANY,
@@ -117,12 +128,40 @@ struct argp_option program_options[] =
       gal_options_parse_name_and_values
     },
     {
+      "equal",
+      UI_KEY_EQUAL,
+      "STR,FLT,FLT",
+      0,
+      "Column, values to keep in output.",
+      UI_GROUP_OUTROWS,
+      &p->equal,
+      GAL_TYPE_STRING,
+      GAL_OPTIONS_RANGE_ANY,
+      GAL_OPTIONS_NOT_MANDATORY,
+      GAL_OPTIONS_NOT_SET,
+      gal_options_parse_name_and_values
+    },
+    {
+      "notequal",
+      UI_KEY_NOTEQUAL,
+      "STR,FLT,FLT",
+      0,
+      "Column, values to remove from output.",
+      UI_GROUP_OUTROWS,
+      &p->notequal,
+      GAL_TYPE_STRING,
+      GAL_OPTIONS_RANGE_ANY,
+      GAL_OPTIONS_NOT_MANDATORY,
+      GAL_OPTIONS_NOT_SET,
+      gal_options_parse_name_and_values
+    },
+    {
       "sort",
       UI_KEY_SORT,
       "STR,INT",
       0,
       "Column name or number for sorting.",
-      GAL_OPTIONS_GROUP_OUTPUT,
+      UI_GROUP_OUTROWS,
       &p->sort,
       GAL_TYPE_STRING,
       GAL_OPTIONS_RANGE_ANY,
@@ -135,7 +174,7 @@ struct argp_option program_options[] =
       0,
       0,
       "Sort in descending order: largets first.",
-      GAL_OPTIONS_GROUP_OUTPUT,
+      UI_GROUP_OUTROWS,
       &p->descending,
       GAL_OPTIONS_NO_ARG_TYPE,
       GAL_OPTIONS_RANGE_0_OR_1,
@@ -148,7 +187,7 @@ struct argp_option program_options[] =
       "INT",
       0,
       "Only output given number of top rows.",
-      GAL_OPTIONS_GROUP_OUTPUT,
+      UI_GROUP_OUTROWS,
       &p->head,
       GAL_TYPE_SIZE_T,
       GAL_OPTIONS_RANGE_GE_0,
@@ -161,7 +200,7 @@ struct argp_option program_options[] =
       "INT",
       0,
       "Only output given number of bottom rows.",
-      GAL_OPTIONS_GROUP_OUTPUT,
+      UI_GROUP_OUTROWS,
       &p->tail,
       GAL_TYPE_SIZE_T,
       GAL_OPTIONS_RANGE_GE_0,
@@ -172,7 +211,6 @@ struct argp_option program_options[] =
 
 
 
-
     /* End. */
     {0}
   };
diff --git a/bin/table/main.h b/bin/table/main.h
index 44694f6..0a55512 100644
--- a/bin/table/main.h
+++ b/bin/table/main.h
@@ -33,14 +33,26 @@ along with Gnuastro. If not, see <http://www.gnu.org/licenses/>.
 #define PROGRAM_EXEC   "asttable"      /* Program executable name. */
 #define PROGRAM_STRING PROGRAM_NAME" (" PACKAGE_NAME ") " PACKAGE_VERSION
 
+/* Row selection types. */
+enum select_types
+{
+ /* Different types of row-selection */
+ SELECT_TYPE_RANGE,             /* 0 by C standard */
+ SELECT_TYPE_EQUAL,
+ SELECT_TYPE_NOTEQUAL,
+
+ /* This marks the total number of row-selection criteria. */
+ SELECT_TYPE_NUMBER,
+};
 
 
 
 /* Basic structure. */
-struct list_range
+struct list_select
 {
-  gal_data_t           *v;
-  struct list_range *next;
+  gal_data_t          *col;
+  int                 type;
+  struct list_select *next;
 };
 
 struct arithmetic_token
@@ -77,6 +89,8 @@ struct tableparams
   uint8_t         information;  /* ==1: only print FITS information.    */
   uint8_t     colinfoinstdout;  /* ==1: print column metadata in CL.    */
   gal_data_t           *range;  /* Range to limit output.               */
+  gal_data_t           *equal;  /* Values to keep in output.            */
+  gal_data_t        *notequal;  /* Values to not include in output.     */
   char                  *sort;  /* Column name or number for sorting.   */
   uint8_t          descending;  /* Sort columns in descending order.    */
   size_t                 head;  /* Output only the no. of top rows.     */
@@ -89,9 +103,11 @@ struct tableparams
   int                    nwcs;  /* Number of WCS structures.            */
   gal_data_t      *allcolinfo;  /* Information of all the columns.      */
   gal_data_t         *sortcol;  /* Column to define a sorting.          */
-  struct list_range *rangecol;  /* Column to define a range.            */
+  int               selection;  /* Any row-selection is requested.      */
+  gal_data_t          *select;  /* Select rows for output.              */
+  struct list_select *selectcol; /* Column to define a range.           */
   uint8_t            freesort;  /* If the sort column should be freed.  */
-  uint8_t          *freerange;  /* If the range column should be freed. */
+  uint8_t         *freeselect;  /* If selection columns should be freed.*/
   uint8_t              sortin;  /* If the sort column is in the output. */
   time_t              rawtime;  /* Starting time of the program.        */
   gal_data_t       **colarray;  /* Array of columns, with arithmetic.   */
diff --git a/bin/table/table.c b/bin/table/table.c
index 6051c38..7db80f7 100644
--- a/bin/table/table.c
+++ b/bin/table/table.c
@@ -75,81 +75,207 @@ table_apply_permutation(gal_data_t *table, size_t *permutation,
 
 
 
-static void
-table_range(struct tableparams *p)
+static gal_data_t *
+table_selection_range(struct tableparams *p, gal_data_t *col)
 {
-  uint8_t *u;
-  double *rarr;
-  gal_data_t *mask;
-  struct list_range *tmp;
-  gal_data_t *ref, *perm, *range, *blmask;
-  size_t i, g, b, *s, *sf, one=1, ngood=0;
-  gal_data_t *min, *max, *ltmin, *gemax, *sum;
-
+  size_t one=1;
+  double *darr;
   int numok=GAL_ARITHMETIC_NUMOK;
   int inplace=GAL_ARITHMETIC_INPLACE;
+  gal_data_t *min=NULL, *max=NULL, *tmp, *ltmin, *gemax=NULL;
 
-  /* Allocate datasets for the necessary numbers and write them in. */
+  /* First, make sure everything is OK. */
+  if(p->range==NULL)
+    error(EXIT_FAILURE, 0, "%s: a bug! Please contact us to fix the "
+          "problem at %s. `p->range' should not be NULL at this point",
+          __func__, PACKAGE_BUGREPORT);
+
+  /* Allocations. */
   min=gal_data_alloc(NULL, GAL_TYPE_FLOAT64, 1, &one, NULL, 0, -1, 1,
                      NULL, NULL, NULL);
   max=gal_data_alloc(NULL, GAL_TYPE_FLOAT64, 1, &one, NULL, 0, -1, 1,
                      NULL, NULL, NULL);
+
+  /* Read the range of values for this column. */
+  darr=p->range->array;
+  ((double *)(min->array))[0] = darr[0];
+  ((double *)(max->array))[0] = darr[1];
+
+  /* Move `p->range' to the next element in the list and free the current
+     one (we have already read its values and don't need it any more). */
+  tmp=p->range;
+  p->range=p->range->next;
+  gal_data_free(tmp);
+
+  /* Find all the elements outside this range (smaller than the minimum
+     or larger than the maximum) as separate binary flags; blank values
+     are masked later, in `table_selection'. */
+  ltmin=gal_arithmetic(GAL_ARITHMETIC_OP_LT, 1, numok, col, min);
+  gemax=gal_arithmetic(GAL_ARITHMETIC_OP_GE, 1, numok, col, max);
+
+  /* Merge them both into one array. */
+  ltmin=gal_arithmetic(GAL_ARITHMETIC_OP_OR, 1, inplace, ltmin, gemax);
+
+  /* For a check.
+  {
+    size_t i;
+    uint8_t *u=ltmin->array;
+    for(i=0;i<ltmin->size;++i) printf("%zu: %u\n", i, u[i]);
+    exit(0);
+  }
+  */
+
+  /* Clean up and return. */
+  gal_data_free(gemax);
+  gal_data_free(min);
+  gal_data_free(max);
+  return ltmin;
+}
+
+
+
+
+
+static gal_data_t *
+table_selection_equal_or_notequal(struct tableparams *p, gal_data_t *col,
+                                  int e0n1)
+{
+  double *darr;
+  size_t i, one=1;
+  int numok=GAL_ARITHMETIC_NUMOK;
+  int inplace=GAL_ARITHMETIC_INPLACE;
+  gal_data_t *eq, *out=NULL, *value=NULL;
+  gal_data_t *arg = e0n1 ? p->notequal : p->equal;
+
+  /* Note that this operator is used to make the "masked" array, so when
+     `e0n1==0' the operator should be `GAL_ARITHMETIC_OP_NE' and
+     vice-versa.
+
+     For the merging with other elements, when `e0n1==0', we need the
+     `GAL_ARITHMETIC_OP_AND', but for `e0n1==1', it should be `OR'. */
+  int mergeop  = e0n1 ? GAL_ARITHMETIC_OP_OR : GAL_ARITHMETIC_OP_AND;
+  int operator = e0n1 ? GAL_ARITHMETIC_OP_EQ : GAL_ARITHMETIC_OP_NE;
+
+  /* First, make sure everything is OK. */
+  if(arg==NULL)
+    error(EXIT_FAILURE, 0, "%s: a bug! Please contact us to fix the "
+          "problem at %s. `p->range' should not be NULL at this point",
+          __func__, PACKAGE_BUGREPORT);
+
+  /* Allocate space for the value. */
+  value=gal_data_alloc(NULL, GAL_TYPE_FLOAT64, 1, &one, NULL, 0, -1, 1,
+                     NULL, NULL, NULL);
+
+  /* Go through the values given to this call of the option and flag the
+     elements. */
+  for(i=0;i<arg->size;++i)
+    {
+      darr=arg->array;
+      ((double *)(value->array))[0] = darr[i];
+      eq=gal_arithmetic(operator, 1, numok, col, value);
+      if(out)
+        {
+          out=gal_arithmetic(mergeop, 1, inplace, out, eq);
+          gal_data_free(eq);
+        }
+      else
+        out=eq;
+    }
+
+  /* For a check.
+  {
+    uint8_t *u=out->array;
+    for(i=0;i<out->size;++i) printf("%zu: %u\n", i, u[i]);
+    exit(0);
+  }
+  */
+
+  /* Move the main pointer to the next possible call of the given
+     option. With this, we can safely free `arg' at this point. */
+  if(e0n1) p->notequal=p->notequal->next;
+  else     p->equal=p->equal->next;
+
+  /* Clean up and return. */
+  gal_data_free(value);
+  gal_data_free(arg);
+  return out;
+}
+
+
+
+
+
+static void
+table_selection(struct tableparams *p)
+{
+  uint8_t *u;
+  struct list_select *tmp;
+  gal_data_t *mask, *addmask=NULL;
+  gal_data_t *sum, *perm, *blmask;
+  size_t i, g, b, *s, *sf, ngood=0;
+  int inplace=GAL_ARITHMETIC_INPLACE;
+
+  /* Allocate datasets for the necessary numbers and write them in. */
   perm=gal_data_alloc(NULL, GAL_TYPE_SIZE_T, 1, p->table->dsize, NULL, 0,
                       p->cp.minmapsize, p->cp.quietmmap, NULL, NULL, NULL);
   mask=gal_data_alloc(NULL, GAL_TYPE_UINT8, 1, p->table->dsize, NULL, 1,
                       p->cp.minmapsize, p->cp.quietmmap, NULL, NULL, NULL);
 
-  /* Go over all the necessary range options. */
-  range=p->range;
-  for(tmp=p->rangecol;tmp!=NULL;tmp=tmp->next)
+  /* Go over each selection criteria and remove the necessary elements. */
+  for(tmp=p->selectcol;tmp!=NULL;tmp=tmp->next)
     {
-      /* Set the minimum and maximum values. */
-      rarr=range->array;
-      ((double *)(min->array))[0] = rarr[0];
-      ((double *)(max->array))[0] = rarr[1];
-
-      /* Set the reference column to read values from. */
-      ref=tmp->v;
-
-      /* Find all the bad elements (smaller than the minimum, larger than
-         the maximum or blank) so we can flag them. */
-      ltmin=gal_arithmetic(GAL_ARITHMETIC_OP_LT, 1, numok,   ref,   min);
-      gemax=gal_arithmetic(GAL_ARITHMETIC_OP_GE, 1, numok,   ref,   max);
-      blmask = ( gal_blank_present(ref, 1)
-                 ? gal_arithmetic(GAL_ARITHMETIC_OP_ISBLANK, 1, 0, ref)
-                 : NULL );
-
-      /* Merge all the flags into one array. */
-      ltmin=gal_arithmetic(GAL_ARITHMETIC_OP_OR, 1, inplace, ltmin, gemax);
-      if(blmask)
-        ltmin=gal_arithmetic(GAL_ARITHMETIC_OP_OR, 1, inplace, ltmin, blmask);
-
-      /* Add these flags to all previous flags. */
-      mask=gal_arithmetic(GAL_ARITHMETIC_OP_OR, 1, inplace, mask, ltmin);
+      switch(tmp->type)
+        {
+        case SELECT_TYPE_RANGE:
+          addmask=table_selection_range(p, tmp->col);
+          break;
+
+        case SELECT_TYPE_EQUAL:
+          addmask=table_selection_equal_or_notequal(p, tmp->col, 0);
+          break;
+
+        case SELECT_TYPE_NOTEQUAL:
+          addmask=table_selection_equal_or_notequal(p, tmp->col, 1);
+          break;
+
+        default:
+          error(EXIT_FAILURE, 0, "%s: a bug! Please contact us at %s "
+                "to fix the problem. The code %d is not a recognized "
+                "range identifier", __func__, PACKAGE_BUGREPORT,
+                tmp->type);
+        }
 
-      /* For a check.
-      {
-        float *f=ref->array;
-        uint8_t *m=mask->array;
-        uint8_t *u=ltmin->array, *uf=u+ltmin->size;
-        printf("\n\nInput column: %s\n", ref->name ? ref->name : "No Name");
-        printf("Range: %g, %g\n", rarr[0], rarr[1]);
-        printf("%-20s%-20s%-20s\n", "Value", "This mask",
-               "Including previous");
-        do printf("%-20f%-20u%-20u\n", *f++, *u++, *m++); while(u<uf);
-        exit(0);
-      }
-      */
+      /* Remove any blank elements. */
+      if(gal_blank_present(tmp->col, 1))
+        {
+          blmask = gal_arithmetic(GAL_ARITHMETIC_OP_ISBLANK, 1, 0, tmp->col);
+          addmask=gal_arithmetic(GAL_ARITHMETIC_OP_OR, 1, inplace,
+                                 addmask, blmask);
+          gal_data_free(blmask);
+        }
 
-      /* Clean up. */
-      gal_data_free(ltmin);
-      gal_data_free(gemax);
+      /* Add this mask array to the cumulative mask array (of all
+         selections). */
+      mask=gal_arithmetic(GAL_ARITHMETIC_OP_OR, 1, inplace, mask, addmask);
 
-      /* Increment pointers. */
-      range=range->next;
+      /* For a check.
+         {
+           float *f=ref->array;
+           uint8_t *m=mask->array;
+           uint8_t *u=addmask->array, *uf=u+addmask->size;
+           printf("\n\nInput column: %s\n", ref->name ? ref->name : "No Name");
+           printf("Range: %g, %g\n", rarr[0], rarr[1]);
+           printf("%-20s%-20s%-20s\n", "Value", "This mask",
+           "Including previous");
+           do printf("%-20f%-20u%-20u\n", *f++, *u++, *m++); while(u<uf);
+           exit(0);
+           }
+        */
+
+      /* Final clean up. */
+      gal_data_free(addmask);
     }
 
-  /* Count the number of bad elements. */
+  /* Find the final number of elements to print. */
   sum=gal_statistics_sum(mask);
   ngood = p->table->size - ((double *)(sum->array))[0];
 
@@ -185,15 +311,13 @@ table_range(struct tableparams *p)
 
   /* Clean up. */
   i=0;
-  for(tmp=p->rangecol;tmp!=NULL;tmp=tmp->next)
-    { if(p->freerange[i]) {gal_data_free(tmp->v); tmp->v=NULL;} ++i; }
-  ui_list_range_free(p->rangecol, 0);
+  for(tmp=p->selectcol;tmp!=NULL;tmp=tmp->next)
+    { if(p->freeselect[i]) {gal_data_free(tmp->col); tmp->col=NULL;} ++i; }
+  ui_list_select_free(p->selectcol, 0);
   gal_data_free(mask);
   gal_data_free(perm);
+  free(p->freeselect);
   gal_data_free(sum);
-  gal_data_free(min);
-  gal_data_free(max);
-  free(p->freerange);
 }
 
 
@@ -215,6 +339,22 @@ table_sort(struct tableparams *p)
                       p->cp.minmapsize, p->cp.quietmmap, NULL, NULL, NULL);
   sf=(s=perm->array)+perm->size; do *s=c++; while(++s<sf);
 
+  /* For string columns, abort with a descriptive error. Note that some
+     FITS tables have been found that actually store numbers in string
+     types! */
+  if(p->sortcol->type==GAL_TYPE_STRING)
+    error(EXIT_FAILURE, 0, "sort column has a string type, but it can "
+          "(currently) only work on numbers.\n\n"
+          "TIP: if you know the columns contents are all numbers that are "
+          "just stored as strings, you can use this program to save the "
+          "table as a text file, modify the column meta-data (for example "
+          "to type `i32' or `f32' instead of `strN'), then use this "
+          "program again to save it as a FITS table.\n\n"
+          "For more on column metadata in plain text format, please run "
+          "the following command (or see the `Gnuastro text table format "
+          "section of the book/manual):\n\n"
+          "    $ info gnuastro \"gnuastro text table format\"");
+
   /* Set the proper qsort function. */
   if(p->descending)
     switch(p->sortcol->type)
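The tip in the error message above can be sketched on the command-line as
follows; the file and column names here are hypothetical (`asttable' is
Table's executable, and `--output' is the common output option):

    ## Save the FITS table as plain text.
    $ asttable cat.fits --output=cat.txt

    ## Manually edit the column's metadata comment line in `cat.txt'
    ## (for example change a type of `str5' to `f32').

    ## Write it back as a FITS table; the column can now be sorted.
    $ asttable cat.txt --sort=MAGNITUDE --output=sorted.fits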
@@ -333,7 +473,7 @@ void
 table(struct tableparams *p)
 {
   /* Apply a certain range (if required) to the output sample. */
-  if(p->range) table_range(p);
+  if(p->selection) table_selection(p);
 
   /* Sort it (if required). */
   if(p->sort) table_sort(p);
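As the driver function above shows, row selection is applied before
sorting. A minimal sketch of combining the two on the command-line (the
catalog and column names are hypothetical):

    $ asttable cat.fits --range=MAGNITUDE,18:22 --sort=MAGNITUDE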
diff --git a/bin/table/ui.c b/bin/table/ui.c
index 86cc27d..d12cc4b 100644
--- a/bin/table/ui.c
+++ b/bin/table/ui.c
@@ -110,6 +110,7 @@ ui_initialize_options(struct tableparams *p,
 
 
   /* Set the necessary common parameters structure. */
+  cp->program_struct     = p;
   cp->poptions           = program_options;
   cp->program_name       = PROGRAM_NAME;
   cp->program_exec       = PROGRAM_EXEC;
@@ -237,8 +238,9 @@ ui_read_check_only_options(struct tableparams *p)
       {
         /* Range needs two input numbers. */
         if(tmp->size!=2)
-          error(EXIT_FAILURE, 0, "two values (separated by comma) necessary "
-                "for `--range' in this format: `--range=COLUMN,min,max'");
+          error(EXIT_FAILURE, 0, "two values (separated by `:' or `,') are "
+                "necessary for `--range' in this format: "
+                "`--range=COLUMN,min:max'");
 
         /* The first must be smaller than the second. */
         darr=tmp->array;
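As a usage sketch of the format described in this error message (the input
table and column name are hypothetical):

    ## Keep only rows with MAGNITUDE between 18 and 22.
    $ asttable cat.fits --range=MAGNITUDE,18:22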
@@ -247,6 +249,7 @@ ui_read_check_only_options(struct tableparams *p)
                 "be smaller than the second (%g)", darr[0], darr[1]);
       }
 
+
   /* Make sure `--head' and `--tail' aren't given together. */
   if(p->head!=GAL_BLANK_SIZE_T && p->tail!=GAL_BLANK_SIZE_T)
     error(EXIT_FAILURE, 0, "`--head' and `--tail' options cannot be "
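For example (with a hypothetical input; only one of the two may be given):

    $ asttable cat.fits --head=10      # Print only the first 10 rows.
    $ asttable cat.fits --tail=10      # Print only the last 10 rows.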
@@ -291,19 +294,20 @@ ui_check_options_and_arguments(struct tableparams *p)
 
 
 /**************************************************************/
-/***************   List of range datasets   *******************/
+/************   List of row-selection requests   **************/
 /**************************************************************/
 static void
-ui_list_range_add(struct list_range **list, gal_data_t *dataset)
+ui_list_select_add(struct list_select **list, gal_data_t *col, int type)
 {
-  struct list_range *newnode;
+  struct list_select *newnode;
 
   errno=0;
   newnode=malloc(sizeof *newnode);
   if(newnode==NULL)
     error(EXIT_FAILURE, errno, "%s: allocating new node", __func__);
 
-  newnode->v=dataset;
+  newnode->col=col;
+  newnode->type=type;
   newnode->next=*list;
   *list=newnode;
 }
@@ -313,15 +317,19 @@ ui_list_range_add(struct list_range **list, gal_data_t *dataset)
 
 
 static gal_data_t *
-ui_list_range_pop(struct list_range **list)
+ui_list_select_pop(struct list_select **list, int *type)
 {
   gal_data_t *out=NULL;
-  struct list_range *tmp;
+  struct list_select *tmp;
   if(*list)
     {
+      /* Extract all the necessary components of the node. */
       tmp=*list;
-      out=tmp->v;
+      out=tmp->col;
+      *type=tmp->type;
       *list=tmp->next;
+
+      /* Delete the node. */
       free(tmp);
     }
   return out;
@@ -332,18 +340,19 @@ ui_list_range_pop(struct list_range **list)
 
 
 static void
-ui_list_range_reverse(struct list_range **list)
+ui_list_select_reverse(struct list_select **list)
 {
+  int thistype;
   gal_data_t *thisdata;
-  struct list_range *correctorder=NULL;
+  struct list_select *correctorder=NULL;
 
   /* Only do the reversal if there is more than one element. */
   if( *list && (*list)->next )
     {
       while(*list!=NULL)
         {
-          thisdata=ui_list_range_pop(list);
-          ui_list_range_add(&correctorder, thisdata);
+          thisdata=ui_list_select_pop(list, &thistype);
+          ui_list_select_add(&correctorder, thisdata, thistype);
         }
       *list=correctorder;
     }
@@ -354,14 +363,14 @@ ui_list_range_reverse(struct list_range **list)
 
 
 void
-ui_list_range_free(struct list_range *list, int freevalue)
+ui_list_select_free(struct list_select *list, int freevalue)
 {
-  struct list_range *tmp;
+  struct list_select *tmp;
   while(list!=NULL)
     {
       tmp=list->next;
       if(freevalue)
-        gal_data_free(list->v);
+        gal_data_free(list->col);
       free(list);
       list=tmp;
     }
@@ -644,7 +653,7 @@ ui_columns_prepare(struct tableparams *p)
    (starting from 0). So if we can read it as a number, we'll subtract one
    from it. */
 static size_t
-ui_check_range_sort_read_col_ind(char *string)
+ui_check_select_sort_read_col_ind(char *string)
 {
   size_t out;
   void *ptr=&out;
@@ -660,44 +669,58 @@ ui_check_range_sort_read_col_ind(char *string)
 
 
 
-/* See if the `--range' and `--sort' columns should also be added. */
+/* See if row selection or sorting needs any extra columns to be read. */
 static void
-ui_check_range_sort_before(struct tableparams *p, gal_list_str_t *lines,
-                           size_t *nrange, size_t *origoutncols,
-                           size_t *sortindout, size_t **rangeindout_out)
-{
-  size_t *rangeind=NULL;
-  size_t *rangeindout=NULL;
+ui_check_select_sort_before(struct tableparams *p, gal_list_str_t *lines,
+                            size_t *nselect, size_t *origoutncols,
+                            size_t *sortindout, size_t **selectindout_out,
+                            size_t **selecttypeout_out)
+{
   gal_data_t *dtmp, *allcols;
   size_t sortind=GAL_BLANK_SIZE_T;
-  int tableformat, rangehasname=0;
   gal_list_sizet_t *tmp, *indexll;
   gal_list_str_t *stmp, *add=NULL;
-  size_t i, j, *s, *sf, allncols, numcols, numrows;
+  int tableformat, selecthasname=0;
+  size_t *selectind=NULL, *selecttype=NULL;
+  size_t *selectindout=NULL, *selecttypeout=NULL;
+  size_t i, j, k, *s, *sf, allncols, numcols, numrows;
+
+  /* Important note: these have to be in the same order as the `enum
+     select_types' in `main.h'. */
+  gal_data_t *select[SELECT_TYPE_NUMBER]={p->range, p->equal, p->notequal};
 
 
   /* Allocate necessary spaces. */
-  if(p->range)
+  if(p->selection)
     {
-      *nrange=gal_list_data_number(p->range);
-      rangeind=gal_pointer_allocate(GAL_TYPE_SIZE_T, *nrange, 0,
-                                    __func__, "rangeind");
-      rangeindout=gal_pointer_allocate(GAL_TYPE_SIZE_T, *nrange, 0,
-                                        __func__, "rangeindout");
-      sf=(s=rangeindout)+*nrange; do *s++=GAL_BLANK_SIZE_T; while(s<sf);
-      *rangeindout_out=rangeindout;
+      *nselect = ( gal_list_data_number(p->range)
+                   + gal_list_data_number(p->equal)
+                   + gal_list_data_number(p->notequal) );
+      selectind=gal_pointer_allocate(GAL_TYPE_SIZE_T, *nselect, 0,
+                                     __func__, "selectind");
+      selecttype=gal_pointer_allocate(GAL_TYPE_SIZE_T, *nselect, 0,
+                                      __func__, "selecttype");
+      selectindout=gal_pointer_allocate(GAL_TYPE_SIZE_T, *nselect, 0,
+                                        __func__, "selectindout");
+      selecttypeout=gal_pointer_allocate(GAL_TYPE_SIZE_T, *nselect, 0,
+                                         __func__, "selecttypeout");
+      sf=(s=selectindout)+*nselect; do *s++=GAL_BLANK_SIZE_T; while(s<sf);
+      *selectindout_out=selectindout;
+      *selecttypeout_out=selecttypeout;
     }
 
 
   /* See if the given columns are numbers or names. */
   i=0;
-  if(p->sort)  sortind  = ui_check_range_sort_read_col_ind(p->sort);
-  if(p->range)
-    for(dtmp=p->range;dtmp!=NULL;dtmp=dtmp->next)
-      {
-        rangeind[i] = ui_check_range_sort_read_col_ind(dtmp->name);
-        ++i;
-      }
+  if(p->sort)  sortind  = ui_check_select_sort_read_col_ind(p->sort);
+  if(p->selection)
+    for(k=0;k<SELECT_TYPE_NUMBER;++k)
+      for(dtmp=select[k];dtmp!=NULL;dtmp=dtmp->next)
+        {
+          selecttype[i] = k;
+          selectind[i] = ui_check_select_sort_read_col_ind(dtmp->name);
+          ++i;
+        }
 
 
   /* Get all the column information. */
@@ -713,21 +736,21 @@ ui_check_range_sort_before(struct tableparams *p, gal_list_str_t *lines,
           "number given to  `--sort' (%s)",
           gal_fits_name_save_as_string(p->filename, p->cp.hdu), numcols,
           p->sort);
-  if(p->range)
-    for(i=0;i<*nrange;++i)
-      if(rangeind[i]!=GAL_BLANK_SIZE_T && rangeind[i]>=numcols)
+  if(p->selection)
+    for(i=0;i<*nselect;++i)
+      if(selectind[i]!=GAL_BLANK_SIZE_T && selectind[i]>=numcols)
         error(EXIT_FAILURE, 0, "%s has %zu columns, less than the column "
-              "number given to  `--range' (%zu)",
+              "number given to  `--range', `--equal', or `--sort' (%zu)",
               gal_fits_name_save_as_string(p->filename, p->cp.hdu), numcols,
-              rangeind[i]);
+              selectind[i]);
 
 
   /* If any of the columns isn't specified by an index, go over the table
      information and set the index based on the names. */
-  if(p->range)
-    for(i=0;i<*nrange;++i)
-      if(rangeind[i]==GAL_BLANK_SIZE_T) { rangehasname=1; break; }
-  if( (p->sort && sortind==GAL_BLANK_SIZE_T) || rangehasname )
+  if(p->selection)
+    for(i=0;i<*nselect;++i)
+      if(selectind[i]==GAL_BLANK_SIZE_T) { selecthasname=1; break; }
+  if( (p->sort && sortind==GAL_BLANK_SIZE_T) || selecthasname )
     {
       /* For `--sort', go over all the columns if an index hasn't been set
          yet. If the input columns have a name, see if their names matches
@@ -737,46 +760,48 @@ ui_check_range_sort_before(struct tableparams *p, gal_list_str_t *lines,
           if( allcols[i].name && !strcasecmp(allcols[i].name, p->sort) )
             { sortind=i; break; }
 
-      /* Same for `--range'. Just note that here we may have multiple calls
-         to `--range'. It is thus important to loop over the values given
-         to range first, then loop over the column names from the start for
-         each new `--ran */
+      /* Same for the selection. Just note that here we may have multiple
+         calls. It is thus important to loop over the given values first,
+         then loop over the column names from the start for each new
+         value. */
       i=0;
-      if(p->range)
-        for(dtmp=p->range;dtmp!=NULL;dtmp=dtmp->next)
+      for(k=0;k<SELECT_TYPE_NUMBER;++k)
+        for(dtmp=select[k];dtmp!=NULL;dtmp=dtmp->next)
           {
-           if(rangeind[i]==GAL_BLANK_SIZE_T)
-             for(j=0;j<numcols;++j)
-               if( allcols[j].name
-                   && !strcasecmp(allcols[j].name, dtmp->name) )
-                 { rangeind[i]=j; break; }
-           ++i;
+            if(selectind[i]==GAL_BLANK_SIZE_T)
+              for(j=0;j<numcols;++j)
+                if( allcols[j].name
+                    && !strcasecmp(allcols[j].name, dtmp->name) )
+                  { selecttype[i]=k; selectind[i]=j; break; }
+            ++i;
           }
     }
 
 
-  /* Both columns must be good indexs now, if they aren't the user didn't
+  /* The columns must be good indexes now; if they aren't, the user didn't
      specify the name properly and the program must abort. */
   if( p->sort && sortind==GAL_BLANK_SIZE_T )
     error(EXIT_FAILURE, 0, "%s: no column named `%s' (value to `--sort') "
           "you can either specify a name or number",
           gal_fits_name_save_as_string(p->filename, p->cp.hdu), p->sort);
-  if(p->range)
+  if(p->selection)
     {
       i=0;
-      for(dtmp=p->range;dtmp!=NULL;dtmp=dtmp->next)
-        {
-          if(rangeind[i]==GAL_BLANK_SIZE_T)
-            error(EXIT_FAILURE, 0, "%s: no column named `%s' (value to "
-                  "`--range') you can either specify a name or number",
-                  gal_fits_name_save_as_string(p->filename, p->cp.hdu),
-                  dtmp->name);
-          ++i;
-        }
+      for(k=0;k<SELECT_TYPE_NUMBER;++k)
+        for(dtmp=select[k];dtmp!=NULL;dtmp=dtmp->next)
+          {
+            if(selectind[i]==GAL_BLANK_SIZE_T)
+              error(EXIT_FAILURE, 0, "%s: no column named `%s' (value to "
+                    "`--%s') you can either specify a name or number",
+                    gal_fits_name_save_as_string(p->filename, p->cp.hdu),
+                    dtmp->name,
+                    ( k==0?"range":( k==1?"equal":"notequal") ));
+            ++i;
+          }
     }
 
 
-  /* See which columns the user has asked for. */
+  /* See which columns the user has asked to output. */
   indexll=gal_table_list_of_indexs(p->columns, allcols, numcols,
                                    p->cp.searchin, p->cp.ignorecase,
                                    p->filename, p->cp.hdu, NULL);
@@ -788,47 +813,53 @@ ui_check_range_sort_before(struct tableparams *p, gal_list_str_t *lines,
   i=0;
   for(tmp=indexll; tmp!=NULL; tmp=tmp->next)
     {
-      if(p->sort  && *sortindout==GAL_BLANK_SIZE_T  && tmp->v == sortind)
+      if(p->sort && *sortindout==GAL_BLANK_SIZE_T  && tmp->v == sortind)
         *sortindout=i;
-      if(p->range)
-        for(j=0;j<*nrange;++j)
-          if(rangeindout[j]==GAL_BLANK_SIZE_T && tmp->v==rangeind[j])
-            rangeindout[j]=i;
+      if(p->selection)
+        for(j=0;j<*nselect;++j)
+          if(selectindout[j]==GAL_BLANK_SIZE_T && tmp->v==selectind[j])
+            {
+              selectindout[j]=i;
+              selecttypeout[j]=selecttype[j];
+            }
       ++i;
     }
 
 
-  /* See if any of the necessary columns (for `--sort' and `--range')
-     aren't requested as an output by the user. If there is any, such
-     columns, keep them here. */
-  if( p->sort && *sortindout==GAL_BLANK_SIZE_T )
-    { *sortindout=allncols++;  gal_list_str_add(&add, p->sort, 0); }
-
+  /* See if any of the sorting or selection columns aren't requested as an
+     output by the user. If there are any, keep their new labels.
 
-  /* Note that the sorting and range may be requested on the same
+     Note that the sorting and range may be requested on the same
      column. In this case, we don't want to read the same column twice. */
-  if(p->range)
+  if( p->sort && *sortindout==GAL_BLANK_SIZE_T )
+    { *sortindout=allncols++;  gal_list_str_add(&add, p->sort, 0); }
+  if(p->selection)
     {
       i=0;
-      for(dtmp=p->range;dtmp!=NULL;dtmp=dtmp->next)
-        {
-          if(*sortindout!=GAL_BLANK_SIZE_T
-             && rangeindout[i]==*sortindout)
-            rangeindout[i]=*sortindout;
-          else
-            {
-              if( rangeindout[i]==GAL_BLANK_SIZE_T )
-                {
-                  rangeindout[i]=allncols++;
-                  gal_list_str_add(&add, dtmp->name, 0);
-                }
-            }
-          ++i;
-        }
+      for(k=0;k<SELECT_TYPE_NUMBER;++k)
+        for(dtmp=select[k];dtmp!=NULL;dtmp=dtmp->next)
+          {
+            if(*sortindout!=GAL_BLANK_SIZE_T && selectindout[i]==*sortindout)
+              {
+                selecttypeout[i]=k;
+                selectindout[i]=*sortindout;
+              }
+            else
+              {
+                if( selectindout[i]==GAL_BLANK_SIZE_T )
+                  {
+                    selecttypeout[i]=k;
+                    selectindout[i]=allncols++;
+                    gal_list_str_add(&add, dtmp->name, 0);
+                  }
+              }
+            ++i;
+          }
     }
 
 
-  /* Add the possibly new set of columns to read. */
+  /* If any new (not requested by the user to output) columns must be read,
+     add them to the list of columns to read from the input file. */
   if(add)
     {
       gal_list_str_reverse(&add);
@@ -838,8 +869,9 @@ ui_check_range_sort_before(struct tableparams *p, gal_list_str_t *lines,
 
 
   /* Clean up. */
-  if(rangeind) free(rangeind);
   gal_list_sizet_free(indexll);
+  if(selectind) free(selectind);
+  if(selecttype) free(selecttype);
   gal_data_array_free(allcols, numcols, 0);
 }
 
@@ -848,80 +880,72 @@ ui_check_range_sort_before(struct tableparams *p, gal_list_str_t *lines,
 
 
 static void
-ui_check_range_sort_after(struct tableparams *p, size_t nrange,
-                          size_t origoutncols, size_t sortindout,
-                          size_t *rangeindout)
+ui_check_select_sort_after(struct tableparams *p, size_t nselect,
+                           size_t origoutncols, size_t sortindout,
+                           size_t *selectindout, size_t *selecttypeout)
 {
-  struct list_range *rtmp;
-  size_t i, j, *rangein=NULL;
-  gal_data_t *tmp, *last=NULL;
+  size_t i, j;
+  struct list_select *rtmp;
+  gal_data_t *tmp, *origlast=NULL;
 
   /* Allocate the necessary arrays. */
-  if(p->range)
-    {
-      rangein=gal_pointer_allocate(GAL_TYPE_UINT8, nrange, 0,
-                                   __func__, "rangein");
-      p->freerange=gal_pointer_allocate(GAL_TYPE_UINT8, nrange, 1,
-                                        __func__, "p->freerange");
-    }
+  if(p->selection)
+    p->freeselect=gal_pointer_allocate(GAL_TYPE_UINT8, nselect, 1,
+                                       __func__, "p->freeselect");
 
 
-  /* Set the proper pointers. For `rangecol' we'll need to do it separately
-     (because the orders can get confused).*/
+  /* Set some necessary pointers (last pointer of actual output table and
+     pointer to the sort column). */
   i=0;
   for(tmp=p->table; tmp!=NULL; tmp=tmp->next)
     {
-      if(i==origoutncols-1)           last=tmp;
+      if(i==origoutncols-1)        origlast=tmp;
       if(p->sort && i==sortindout) p->sortcol=tmp;
       ++i;
     }
 
 
-  /* Find the range columns. */
-  for(i=0;i<nrange;++i)
+  /* Since we can have several selection columns, we'll treat them
+     differently. */
+  for(i=0;i<nselect;++i)
     {
       j=0;
       for(tmp=p->table; tmp!=NULL; tmp=tmp->next)
         {
-          if(j==rangeindout[i])
+          if(j==selectindout[i])
             {
-              ui_list_range_add(&p->rangecol, tmp);
+              ui_list_select_add(&p->selectcol, tmp, selecttypeout[i]);
               break;
             }
           ++j;
         }
     }
-  ui_list_range_reverse(&p->rangecol);
+  ui_list_select_reverse(&p->selectcol);
 
 
-  /* Terminate the actual table where it should be terminated (by setting
-     `last->next' to NULL. */
-  last->next=NULL;
+  /* Terminate the desired output table where it should be terminated (by
+     setting `origlast->next' to NULL). */
+  origlast->next=NULL;
 
 
   /*  Also, remove any possibly existing `next' pointer for `sortcol' and
-     `rangecol'. */
+     `selectcol'. */
   if(p->sort && sortindout>=origoutncols)
     { p->sortcol->next=NULL;  p->freesort=1; }
   else p->sortin=1;
-  if(p->range)
+  if(p->selection)
     {
       i=0;
-      for(rtmp=p->rangecol;rtmp!=NULL;rtmp=rtmp->next)
+      for(rtmp=p->selectcol;rtmp!=NULL;rtmp=rtmp->next)
         {
-          if(rangeindout[i]>=origoutncols)
+          if(selectindout[i]>=origoutncols)
             {
-              rtmp->v->next=NULL;
-              p->freerange[i] = (rtmp->v==p->sortcol) ? 0 : 1;
+              rtmp->col->next=NULL;
+              p->freeselect[i] = (rtmp->col==p->sortcol) ? 0 : 1;
             }
-          else rangein[i]=1;
           ++i;
         }
     }
-
-
-  /* Clean up. */
-  if(rangein) free(rangein);
 }
 
 
@@ -934,9 +958,10 @@ ui_preparations(struct tableparams *p)
 {
   size_t *colmatch;
   gal_list_str_t *lines;
-  size_t nrange=0, origoutncols=0;
+  size_t nselect=0, origoutncols=0;
+  size_t sortindout=GAL_BLANK_SIZE_T;
   struct gal_options_common_params *cp=&p->cp;
-  size_t sortindout=GAL_BLANK_SIZE_T, *rangeindout=NULL;
+  size_t *selectindout=NULL, *selecttypeout=NULL;
 
   /* If there were no columns specified or the user has asked for
      information on the columns, we want the full set of columns. */
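For example, a minimal sketch of the information request mentioned in the
comment above (the file name is hypothetical):

    $ asttable cat.fits --information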
@@ -952,10 +977,14 @@ ui_preparations(struct tableparams *p)
   lines=gal_options_check_stdin(p->filename, p->cp.stdintimeout, "input");
 
 
-  /* If sort or range are given, see if we should read them also. */
-  if(p->range || p->sort)
-    ui_check_range_sort_before(p, lines, &nrange, &origoutncols, &sortindout,
-                               &rangeindout);
+  /* If any kind of row-selection is requested, set `p->selection' to 1. */
+  p->selection = p->range || p->equal || p->notequal;
+
+  /* If row sorting or selection is requested, see if we should read any
+     extra columns. */
+  if(p->selection || p->sort)
+    ui_check_select_sort_before(p, lines, &nselect, &origoutncols, &sortindout,
+                                &selectindout, &selecttypeout);
 
 
   /* If we have any arithmetic operations, we need to make sure how many
@@ -975,11 +1004,11 @@ ui_preparations(struct tableparams *p)
   gal_list_str_free(lines, 1);
 
 
-  /* If the range and sort options are requested, keep them as separate
-     datasets. */
-  if(p->range || p->sort)
-    ui_check_range_sort_after(p, nrange, origoutncols, sortindout,
-                              rangeindout);
+  /* If row sorting or selection is requested, keep them as separate
+     datasets. */
+  if(p->selection || p->sort)
+    ui_check_select_sort_after(p, nselect, origoutncols, sortindout,
+                               selectindout, selecttypeout);
 
 
   /* If there was no actual data in the file, then inform the user and
@@ -1018,7 +1047,8 @@ ui_preparations(struct tableparams *p)
 
   /* Clean up. */
   free(colmatch);
-  if(rangeindout) free(rangeindout);
+  if(selectindout) free(selectindout);
+  if(selecttypeout) free(selecttypeout);
 }
 
 
diff --git a/bin/table/ui.h b/bin/table/ui.h
index 37f61a3..7af1d1c 100644
--- a/bin/table/ui.h
+++ b/bin/table/ui.h
@@ -30,9 +30,18 @@ along with Gnuastro. If not, see <http://www.gnu.org/licenses/>.
 
 
 
+/* Option groups particular to this program. */
+enum program_args_groups
+{
+  UI_GROUP_OUTROWS = GAL_OPTIONS_GROUP_AFTER_COMMON,
+};
+
+
+
+
 /* Available letters for short options:
 
-   a b d e f g j k l m n p t u v x y z
+   a b d f g j k l m p t u v x y z
    A B C E G H J L O Q R X Y
 */
 enum option_keys_enum
@@ -44,6 +53,8 @@ enum option_keys_enum
   UI_KEY_INFORMATION     = 'i',
   UI_KEY_COLINFOINSTDOUT = 'O',
   UI_KEY_RANGE           = 'r',
+  UI_KEY_EQUAL           = 'e',
+  UI_KEY_NOTEQUAL        = 'n',
   UI_KEY_SORT            = 's',
   UI_KEY_DESCENDING      = 'd',
   UI_KEY_HEAD            = 'H',
@@ -61,7 +72,7 @@ void
 ui_read_check_inputs_setup(int argc, char *argv[], struct tableparams *p);
 
 void
-ui_list_range_free(struct list_range *list, int freevalue);
+ui_list_select_free(struct list_select *list, int freevalue);
 
 void
 ui_free_report(struct tableparams *p);
diff --git a/bin/warp/Makefile.am b/bin/warp/Makefile.am
index b7c0a7f..9759465 100644
--- a/bin/warp/Makefile.am
+++ b/bin/warp/Makefile.am
@@ -23,13 +23,17 @@
 AM_LDFLAGS = -L\$(top_builddir)/lib
 AM_CPPFLAGS = -I\$(top_srcdir)/bootstrapped/lib -I\$(top_srcdir)/lib
 
+if COND_NORPATH
+  MAYBE_NORPATH = $(CONFIG_LDADD)
+endif
 
 
 ## Program definition (name, linking, sources and headers)
 bin_PROGRAMS = astwarp
 
 ## Reason for linking with `libgnu' described in `bin/TEMPLATE/Makefile.am'.
-astwarp_LDADD = $(top_builddir)/bootstrapped/lib/libgnu.la -lgnuastro
+astwarp_LDADD = $(top_builddir)/bootstrapped/lib/libgnu.la -lgnuastro \
+                $(MAYBE_NORPATH)
 
 astwarp_SOURCES = main.c ui.c warp.c
 
diff --git a/configure.ac b/configure.ac
index cd8f11f..d3b48c6 100644
--- a/configure.ac
+++ b/configure.ac
@@ -50,7 +50,7 @@ AC_CONFIG_MACRO_DIRS([bootstrapped/m4])
 
 # Library version, see the GNU Libtool manual ("Library interface versions"
 # section for the exact definition of each) for
-GAL_CURRENT=8
+GAL_CURRENT=9
 GAL_REVISION=0
 GAL_AGE=0
 GAL_LT_VERSION="${GAL_CURRENT}:${GAL_REVISION}:${GAL_AGE}"
@@ -251,82 +251,6 @@ AC_MSG_RESULT( $path_warning )
 
 
 
-# GAL_LIBCHECK([$LIBNAME], [libname], [-lname])
-# ---------------------------------------------
-#
-# Custom macro to correct LIBS and LD_LIBRARY_PATH when necessary.
-#   1) LIB<NAME> output of AC_LIB_HAVE_LINKFLAGS (e.g., `LIBGSL').
-#   2) lib<name> of the library (e.g., `libgsl').
-#   3) Full list of libraries to link with (e.g., `-lgsl -lgslcblas').
-#
-# To understand this, have in mind that when the library can be linked in
-# the standard search path without conflict, the output of
-# `AC_LIB_HAVE_LINKFLAGS' is going to be the actual linking command (like
-# `-lgsl -lgslcblas'). Otherwise, it will actually return the absolute
-# shared library address (for example `/some/directory/libgsl.so').
-#
-# So when `AC_LIB_HAVE_LINKFLAGS' returns a string containing `lib<name>',
-# it has found the shared library and we must add its directory to
-# LD_LIBRARY_PATH.
-AC_DEFUN([GAL_LIBCHECK],
-[
-  n=`AS_ECHO([$1]) | grep $2`;
-  AS_IF([test "x$n" = x],
-        [LIBS="$1 $LIBS"],
-        [
-          # Go through all the tokens of `AC_LIB_HAVE_LINKFLAGS'.
-          for token in $1; do
-            d="";
-
-            # See if this token has `lib<name' in it. If it does, then
-            # we'll have to extract the directory.
-            a=`AS_ECHO([$token]) | grep $2`
-            AS_IF([test "x$a" = x], [],
-                  [
-                    # Use `lib<name>' as a delimiter to extract the
-                    # library (the first token will be the library's
-                    # directory).
-                    n=`AS_ECHO([$token]) | sed 's/\/'$2'/ /g'`
-                    for b in $n; do d=$b; break; done
-                  ])
-
-             # If a directory was found, then stop parsing tokens.
-             AS_IF([test "x$d" = x], [], [break])
-          done;
-
-          # Add the necessary linking flags to LIBS with the proper `-L'
-          # (when necessary) to find them.
-          AS_IF( [ test "x$d" = x ],
-                 [LIBS="$3 $LIBS"],
-                 [LIBS="-L$d $3 $LIBS"] )
-
-          # Add the directory to LD_LIBRARY_PATH (if necessary). Go
-          # through all the directories in LD_LIBRARY_PATH and if the
-          # library doesn't exist add it to the end.
-          nldp="";
-          exists=0;
-          for i in `AS_ECHO(["$LD_LIBRARY_PATH"]) | sed "s/:/ /g"`; do
-            AS_IF([test "x$d" = "x$i"],[exists=1])
-            AS_IF([test "x$nldp" = x],[nldp=$i],[nldp="$nldp:$i"])
-          done
-
-          # If the directory doesn't already exist in LD_LIBRARY_PATH, then
-          # add it.
-          AS_IF([test $exists = 0],
-                [
-                  nldp="$nldp:$d"
-                  AS_IF([test "x$ldlibpathnew" = x],
-                        [ldlibpathnew="$d"],
-                        [ldlibpathnew="$ldlibpathnew:$d"])
-                ])
-          LD_LIBRARY_PATH=$nldp
-        ])
-])
-
-
-
-
-
 # Libraries
 # ---------
 #
@@ -416,14 +340,19 @@ AS_IF([test "x$LIBWCS" = x],
       [LDADD="$LTLIBWCS $LDADD"; LIBS="$LIBWCS $LIBS"])
 
 
-AC_LIB_HAVE_LINKFLAGS([jpeg], [], [
+AC_ARG_WITH([libjpeg],
+            [AS_HELP_STRING([--without-libjpeg],
+                            [disable support for libjpeg])],
+            [], [with_libjpeg=yes])
+AS_IF([test "x$with_libjpeg" != xno],
+      [ AC_LIB_HAVE_LINKFLAGS([jpeg], [], [
 #include <stdio.h>
 #include <stdlib.h>
 #include <jpeglib.h>
 void junk(void) {
   struct jpeg_decompress_struct cinfo;
   jpeg_create_decompress(&cinfo);
-}  ])
+} ]) ])
 AS_IF([test "x$LIBJPEG" = x],
       [missing_optional_lib=yes; has_libjpeg=no; anywarnings=yes],
       [LDADD="$LTLIBJPEG $LDADD"; LIBS="$LIBJPEG $LIBS"])
@@ -435,14 +364,18 @@ AM_CONDITIONAL([COND_HASLIBJPEG], [test "x$has_libjpeg" = "xyes"])
 # the LZMA library. But if libtiff hasn't been linked with it and it's
 # present, there is no problem, the linker will just pass over it. So we
 # don't need to stop the build if this fails.
-AC_LIB_HAVE_LINKFLAGS([lzma], [], [#include <lzma.h>])
+AC_ARG_WITH([libtiff],
+            [AS_HELP_STRING([--without-libtiff],
+                            [disable support for libtiff])],
+            [], [with_libtiff=yes])
+AS_IF([test "x$with_libtiff" != xno],
+      [ AC_LIB_HAVE_LINKFLAGS([lzma], [], [#include <lzma.h>])
+        AC_LIB_HAVE_LINKFLAGS([tiff], [], [
+#include <tiffio.h>
+void junk(void) {TIFF *tif=TIFFOpen("junk", "r");} ])
+      ])
 AS_IF([test "x$LIBLZMA" = x], [],
       [LDADD="$LTLIBLZMA $LDADD"; LIBS="$LIBLZMA $LIBS"])
-
-AC_LIB_HAVE_LINKFLAGS([tiff], [], [
-#include <tiffio.h>
-void junk(void) {TIFF *tif=TIFFOpen("junk", "r");}
-])
 AS_IF([test "x$LIBTIFF" = x],
       [missing_optional_lib=yes; has_libtiff=no; anywarnings=yes],
       [LDADD="$LTLIBTIFF $LDADD"; LIBS="$LIBTIFF $LIBS"])
@@ -451,10 +384,15 @@ AM_CONDITIONAL([COND_HASLIBTIFF], [test "x$has_libtiff" = "xyes"])
 
 # Check libgit2. Note that very old versions of libgit2 don't have the
 # `git_libgit2_init' function.
-AC_LIB_HAVE_LINKFLAGS([git2], [], [
+AC_ARG_WITH([libgit2],
+            [AS_HELP_STRING([--without-libgit2],
+                            [disable support for libgit2])],
+            [], [with_libgit2=yes])
+AS_IF([test "x$with_libgit2" != xno],
+      [ AC_LIB_HAVE_LINKFLAGS([git2], [], [
 #include <git2.h>
-void junk(void) {git_libgit2_init();}
-])
+void junk(void) {git_libgit2_init();} ])
+      ])
 AS_IF([test "x$LIBGIT2" = x],
       [missing_optional_lib=yes; has_libgit2=0],
       [LDADD="$LTLIBGIT2 $LDADD"; LIBS="$LIBGIT2 $LIBS"])
@@ -945,6 +883,7 @@ AM_CONDITIONAL([COND_WARP],        [test $enable_warp = yes])
 # linking flags and put them in the Makefiles.
 LIBS="$orig_LIBS"
 AC_SUBST(CONFIG_LDADD, [$LDADD])
+AM_CONDITIONAL([COND_NORPATH], [test "x$enable_rpath" = "xno"])
 AS_ECHO(["linking flags (LDADD) ... $LDADD"])
 
 
@@ -1042,23 +981,26 @@ AS_IF([test x$enable_guide_message = xyes],
         AS_IF([test "x$has_libjpeg" = "xno"],
               [dependency_notice=yes
                AS_ECHO(["  - libjpeg (http://ijg.org), could not be linked 
with in your library"])
-               AS_ECHO(["    search path. If JPEG inputs/outputs are 
requested, the respective"])
-               AS_ECHO(["    tool will inform you and abort with an error."])
+               AS_ECHO(["    search path, or is manually disabled. If JPEG 
inputs/outputs are"])
+               AS_ECHO(["    requested, the respective tool will inform you 
and abort with an"])
+               AS_ECHO(["    error."])
                AS_ECHO([]) ])
 
         AS_IF([test "x$has_libtiff" = "xno"],
               [dependency_notice=yes
                AS_ECHO(["  - libtiff (http://libtiff.maptools.org), could not 
be linked with in"])
-               AS_ECHO(["    your library search path. If TIFF inputs/outputs 
are requested, the"])
-               AS_ECHO(["    respective tool will inform you and abort with an 
error."])
+               AS_ECHO(["    your library search path, or is manually 
disabled. If TIFF"])
+               AS_ECHO(["    inputs/outputs are requested, the respective tool 
will inform"])
+               AS_ECHO(["    you and abort with an error."])
                AS_ECHO([]) ])
 
         AS_IF([test "x$has_libgit2" = "x0"],
               [dependency_notice=yes
                AS_ECHO(["  - libgit2 (https://libgit2.org), could not be 
linked with in your"])
-               AS_ECHO(["    library search path. When present, Git's describe 
output will be"])
-               AS_ECHO(["    stored in the output files if Gnuastro's programs 
were called"])
-               AS_ECHO(["    within a Gitversion controlled directory to help 
in reproducibility."])
+               AS_ECHO(["    library search path, or is manually disabled. 
When present, Git's"])
+               AS_ECHO(["    describe output will be stored in the output 
files if Gnuastro's"])
+               AS_ECHO(["    programs were called within a Gitversion 
controlled directory to"])
+               AS_ECHO(["    help in reproducibility."])
                AS_ECHO([]) ])
 
         AS_IF([test "x$usable_libtool" = "xno"],
@@ -1121,23 +1063,6 @@ AS_IF([test x$enable_guide_message = xyes],
                AS_ECHO(["    installing Gnuastro to learn more about PATH:"])
                AS_ECHO(["        $ info gnuastro \"Installation directory\""])
                AS_ECHO([]) ])
-
-        # Notice about run-time linking.
-        AS_IF([test "x$nldpath" = x], [],
-              [AS_ECHO(["  - After installation, to run Gnuastro's programs, 
your run-time"])
-               AS_ECHO(["    link path (LD_LIBRARY_PATH) needs to contain the 
following "])
-               AS_ECHO(["    directory(s):"])
-               AS_ECHO(["        $nldpath"])
-               AS_ECHO(["    If there is more than one directory, they are 
separated with a"])
-               AS_ECHO(["    colon (':'). You can check the current value 
with:"])
-               AS_ECHO(["        echo \$LD_LIBRARY_PATH"])
-               AS_ECHO(["    If not present, add this line in your shell's 
startup script"])
-               AS_ECHO(["    (for example '~/.bashrc'):"])
-               AS_ECHO(["        export 
LD_LIBRARY_PATH=\"\$LD_LIBRARY_PATH:$nldpath\""])
-               AS_ECHO(["    This worning won't cause any problems during the 
rest of Gnuastro's"])
-               AS_ECHO(["    build and installation. But you'll need it later, 
when you are using"])
-               AS_ECHO(["    Gnuastro."])
-               AS_ECHO([]) ])
       ]
      )
   AS_ECHO(["To build Gnuastro $PACKAGE_VERSION, please run:"])
diff --git a/doc/announce-acknowledge.txt b/doc/announce-acknowledge.txt
index 241dcec..1f9dc27 100644
--- a/doc/announce-acknowledge.txt
+++ b/doc/announce-acknowledge.txt
@@ -1,21 +1,6 @@
 Alphabetically ordered list to acknowledge in the next release.
 
-Hamed Altafi
-Roberto Baena Gallé
-Zahra Bagheri
-Leindert Boogaard
-Bruno Haible
-Raul Infante-Sainz
-Lee Kelvin
-Elham Saremi
-Zahra Sharbaf
-David Valls-Gabaud
-Michael Wilkinson
-
-
-
-
-
+Raúl Infante Sainz
 
 
 
diff --git a/doc/genauthors b/doc/genauthors
index 078fbf1..42806e9 100755
--- a/doc/genauthors
+++ b/doc/genauthors
@@ -44,19 +44,27 @@ if [ -d $1/.git ]; then
     # directory. The original `.mailmap' is in the `TOP_SRCDIR', so even
     # when the source and build directories are the same, there is no
     # problem.
-    ln -s $1/.mailmap .mailmap
+    #
+    # But in case `.mailmap' already exists (for example when the script
+    # is run from the top source directory instead of the `doc' directory,
+    # or when a symbolic link was already created), we won't do any copying.
+    if [ -e .mailmap ]; then keepmailmap=1;
+    else                     keepmailmap=0; ln -s $1/.mailmap .mailmap;
+    fi
 
     # Do NOT test if authors.texi is newer than ../.git.  In some cases the
     # list of authors is created empty when running make in top directory
     # (in particular "make -jN" with N > 1), so authors.texi needs to be
     # recreated anyway.
     git --git-dir=$1/.git shortlog --numbered --summary --email --no-merges \
-        | sed -e 's/</ /' -e 's/>/ /' -e 's/@/@@/' -e "s/è/@\`e/"           \
-        | awk '{printf "%s %s (%s, %s)@*\n", $2, $3, $4, $1}'               \
+        | sed -e 's/</ /' -e 's/>/ /' -e 's/@/@@/' \
+              -e "s/è/@\`e/" -e "s/é/@\'e/" \
+        | awk '{for(i=2;i<NF;++i) printf("%s ", $i); \
+                printf("(%s, %s)@*\n", $NF, $1)}' \
         > $1/doc/authors.texi
 
-    # Clean up:
-    rm .mailmap
+    # Clean up (if necessary)
+    if [ $keepmailmap = 0 ]; then rm .mailmap; fi
 
     # Check if the authors.texi file was actually written:
     if [ ! -s $1/doc/authors.texi ]; then
diff --git a/doc/gnuastro.en.html b/doc/gnuastro.en.html
index eba3998..5da54f4 100644
--- a/doc/gnuastro.en.html
+++ b/doc/gnuastro.en.html
@@ -85,9 +85,9 @@ for entertaining and easy to read real world examples of using
 
 <p>
   The current stable release
-  is <a href="http://ftp.gnu.org/gnu/gnuastro/gnuastro-0.9.tar.gz">Gnuastro
-  0.9</a> (April 17th, 2019).
-  Use <a href="http://ftpmirror.gnu.org/gnuastro/gnuastro-0.9.tar.gz">a
+  is <a href="http://ftp.gnu.org/gnu/gnuastro/gnuastro-0.10.tar.gz">Gnuastro
+  0.10</a> (August 3rd, 2019).
+  Use <a href="http://ftpmirror.gnu.org/gnuastro/gnuastro-0.10.tar.gz">a
   mirror</a> if possible.
 
   <!-- Comment the test release notice when the test release is not more
@@ -98,7 +98,7 @@ for entertaining and easy to read real world examples of using
   To stay up to date, please subscribe.</p>
 
 <p>For details of the significant changes in this release, please see the
-  <a href="https://git.savannah.gnu.org/cgit/gnuastro.git/plain/NEWS?id=gnuastro_v0.9">NEWS</a>
+  <a href="https://git.savannah.gnu.org/cgit/gnuastro.git/plain/NEWS?id=gnuastro_v0.10">NEWS</a>
   file.</p>
 
 <p>The
diff --git a/doc/gnuastro.fr.html b/doc/gnuastro.fr.html
index ee9f654..8ba8ceb 100644
--- a/doc/gnuastro.fr.html
+++ b/doc/gnuastro.fr.html
@@ -85,15 +85,15 @@ h3 { clear: both; }
 <h3 id="download">Téléchargement</h3>
 
 <p>La version stable actuelle
-  est <a href="https://ftp.gnu.org/gnu/gnuastro/gnuastro-0.9.tar.gz">Gnuastro
-  0.9</a> (sortie le 28 avril
-  2019). Utilisez <a href="https://ftpmirror.gnu.org/gnuastro/gnuastro-0.9.tar.gz">un
+  est <a href="https://ftp.gnu.org/gnu/gnuastro/gnuastro-0.10.tar.gz">Gnuastro
+  0.10</a> (sortie le 3 août
+  2019). Utilisez <a href="https://ftpmirror.gnu.org/gnuastro/gnuastro-0.10.tar.gz">un
   miroir</a> si possible.  <br />Les nouvelles publications sont annoncées
   sur <a href="https://lists.gnu.org/mailman/listinfo/info-gnuastro">info-gnuastro</a>.
   Abonnez-vous pour rester au courant.</p>
 
 <p>Les changements importants sont décrits dans le
-  fichier <a href="https://git.savannah.gnu.org/cgit/gnuastro.git/plain/NEWS?id=gnuastro_v0.9">
+  fichier <a href="https://git.savannah.gnu.org/cgit/gnuastro.git/plain/NEWS?id=gnuastro_v0.10">
   NEWS</a>.</p>
 
 <p>Le lien
diff --git a/doc/gnuastro.texi b/doc/gnuastro.texi
index cdba66c..fd62f19 100644
--- a/doc/gnuastro.texi
+++ b/doc/gnuastro.texi
@@ -1,4 +1,12 @@
 \input texinfo @c -*-texinfo-*-
+
+@c ONE SENTENCE PER LINE
+@c ---------------------
+@c For main printed text in this file, to allow easy tracking of history
+@c with Git, we are following a one-sentence-per-line convention.
+@c
+@c Since the manual is long, this is being done gradually from the start.
+
 @c %**start of header
 @setfilename gnuastro.info
 @settitle GNU Astronomy Utilities
@@ -25,19 +33,14 @@
 
 @c Copyright information:
 @copying
-This book documents version @value{VERSION} of the GNU Astronomy Utilities
-(Gnuastro). Gnuastro provides various programs and libraries for
-astronomical data manipulation and analysis.
+This book documents version @value{VERSION} of the GNU Astronomy Utilities (Gnuastro).
+Gnuastro provides various programs and libraries for astronomical data manipulation and analysis.
 
 Copyright @copyright{} 2015-2019 Free Software Foundation, Inc.
 
 @quotation
-Permission is granted to copy, distribute and/or modify this document under
-the terms of the GNU Free Documentation License, Version 1.3 or any later
-version published by the Free Software Foundation; with no Invariant
-Sections, no Front-Cover Texts, and no Back-Cover Texts.  A copy of the
-license is included in the section entitled ``GNU Free Documentation
-License''.
+Permission is granted to copy, distribute and/or modify this document under the terms of the GNU Free Documentation License, Version 1.3 or any later version published by the Free Software Foundation; with no Invariant Sections, no Front-Cover Texts, and no Back-Cover Texts.
+A copy of the license is included in the section entitled ``GNU Free Documentation License''.
 @end quotation
 @end copying
 
@@ -149,15 +152,8 @@ commits):
 @*
 @*
 @*
-For myself, I am interested in science and in philosophy only because I
-want to learn something about the riddle of the world in which we live, and
-the riddle of man's knowledge of that world. And I believe that only a
-revival of interest in these riddles can save the sciences and philosophy
-from narrow specialization and from an obscurantist faith in the expert's
-special skill, and in his personal knowledge and authority; a faith that so
-well fits our `post-rationalist' and `post-critical' age, proudly dedicated
-to the destruction of the tradition of rational philosophy, and of rational
-thought itself.
+For myself, I am interested in science and in philosophy only because I want to learn something about the riddle of the world in which we live, and the riddle of man's knowledge of that world.
+And I believe that only a revival of interest in these riddles can save the sciences and philosophy from narrow specialization and from an obscurantist faith in the expert's special skill, and in his personal knowledge and authority; a faith that so well fits our `post-rationalist' and `post-critical' age, proudly dedicated to the destruction of the tradition of rational philosophy, and of rational thought itself.
 @author Karl Popper. The logic of scientific discovery. 1959.
 @end quotation
 
@@ -184,13 +180,9 @@ thought itself.
 @insertcopying
 
 @ifhtml
-To navigate easily in this web page, you can use the @code{Next},
-@code{Previous}, @code{Up} and @code{Contents} links in the top and
-bottom of each page. @code{Next} and @code{Previous} will take you to
-the next or previous topic in the same level, for example from chapter
-1 to chapter 2 or vice versa. To go to the sections or subsections,
-you have to click on the menu entries that are there when ever a
-sub-component to a title is present.
+To navigate easily in this web page, you can use the @code{Next}, @code{Previous}, @code{Up} and @code{Contents} links in the top and bottom of each page.
+@code{Next} and @code{Previous} will take you to the next or previous topic in the same level, for example from chapter 1 to chapter 2 or vice versa.
+To go to the sections or subsections, you have to click on the menu entries that are there whenever a sub-component to a title is present.
 @end ifhtml
 
 @end ifnottex
@@ -264,7 +256,7 @@ General program usage tutorial
 * NoiseChisel optimization for storage::  Dramatically decrease output's volume.
 * Segmentation and making a catalog::  Finding true peaks and creating a catalog.
 * Working with catalogs estimating colors::  Estimating colors using the catalogs.
-* Aperture photomery::          Doing photometry on a fixed aperture.
+* Aperture photometry::         Doing photometry on a fixed aperture.
 * Finding reddest clumps and visual inspection::  Selecting some targets and inspecting them.
 * Citing and acknowledging Gnuastro::  How to cite and acknowledge Gnuastro in your papers.
 
@@ -629,7 +621,7 @@ Gnuastro library
 * Convolution functions::       Library functions to do convolution.
 * Interpolation::               Interpolate (over blank values possibly).
 * Git wrappers::                Wrappers for functions in libgit2.
-* Spectral lines library::
+* Spectral lines library::      Functions for operating on spectral lines.
 * Cosmology library::           Cosmological calculations.
 
 Multithreaded programming (@file{threads.h})
@@ -728,34 +720,19 @@ SAO ds9
 
 @cindex GNU coding standards
 @cindex GNU Astronomy Utilities (Gnuastro)
-GNU Astronomy Utilities (Gnuastro) is an official GNU package consisting of
-separate programs and libraries for the manipulation and analysis of
-astronomical data. All the programs share the same basic command-line user
-interface for the comfort of both the users and developers. Gnuastro is
-written to comply fully with the GNU coding standards so it integrates
-finely with the GNU/Linux operating system. This also enables astronomers
-to expect a fully familiar experience in the source code, building,
-installing and command-line user interaction that they have seen in all the
-other GNU software that they use. The official and always up to date
-version of this book (or manual) is freely available under @ref{GNU Free
-Doc. License} in various formats (PDF, HTML, plain text, info, and as its
-Texinfo source) at @url{http://www.gnu.org/software/gnuastro/manual/}.
-
-For users who are new to the GNU/Linux environment, unless otherwise
-specified most of the topics in @ref{Installation} and @ref{Common program
-behavior} are common to all GNU software, for example installation,
-managing command-line options or getting help (also see @ref{New to
-GNU/Linux?}). So if you are new to this empowering environment, we
-encourage you to go through these chapters carefully. They can be a
-starting point from which you can continue to learn more from each
-program's own manual and fully benefit from and enjoy this wonderful
-environment. Gnuastro also comes with a large set of libraries, so you can
-write your own programs using Gnuastro's building blocks, see @ref{Review
-of library fundamentals} for an introduction.
-
-In Gnuastro, no change to any program or library will be committed to its
-history, before it has been fully documented here first. As discussed in
-@ref{Science and its tools} this is a founding principle of the Gnuastro.
+GNU Astronomy Utilities (Gnuastro) is an official GNU package consisting of separate programs and libraries for the manipulation and analysis of astronomical data.
+All the programs share the same basic command-line user interface for the comfort of both the users and developers.
+Gnuastro is written to comply fully with the GNU coding standards so it integrates finely with the GNU/Linux operating system.
+This also enables astronomers to expect a fully familiar experience in the source code, building, installing and command-line user interaction that they have seen in all the other GNU software that they use.
+The official and always up to date version of this book (or manual) is freely available under @ref{GNU Free Doc. License} in various formats (PDF, HTML, plain text, info, and as its Texinfo source) at @url{http://www.gnu.org/software/gnuastro/manual/}.
+
+For users who are new to the GNU/Linux environment, unless otherwise specified most of the topics in @ref{Installation} and @ref{Common program behavior} are common to all GNU software, for example installation, managing command-line options or getting help (also see @ref{New to GNU/Linux?}).
+So if you are new to this empowering environment, we encourage you to go through these chapters carefully.
+They can be a starting point from which you can continue to learn more from each program's own manual and fully benefit from and enjoy this wonderful environment.
+Gnuastro also comes with a large set of libraries, so you can write your own programs using Gnuastro's building blocks, see @ref{Review of library fundamentals} for an introduction.
+
+In Gnuastro, no change to any program or library will be committed to its history before it has been fully documented here first.
+As discussed in @ref{Science and its tools} this is a founding principle of Gnuastro.
 
 @menu
 * Quick start::                 A quick start to installation.
@@ -783,32 +760,15 @@ history, before it has been fully documented here first. As discussed in
 @cindex GNU Tar
 @cindex Uncompress source
 @cindex Source, uncompress
-The latest official release tarball is always available as
-@url{http://ftp.gnu.org/gnu/gnuastro/gnuastro-latest.tar.gz,
-@file{gnuastro-latest.tar.gz}}. For better compression (faster download),
-and robust archival features, an @url{http://www.nongnu.org/lzip/lzip.html,
-Lzip} compressed tarball is also available at
-@url{http://ftp.gnu.org/gnu/gnuastro/gnuastro-latest.tar.lz,
-@file{gnuastro-latest.tar.lz}}, see @ref{Release tarball} for more details
-on the tarball release@footnote{The Gzip library and program are commonly
-available on most systems. However, Gnuastro recommends Lzip as described
-above and the beta-releases are also only distributed in @file{tar.lz}. You
-can download and install Lzip's source (in @file{.tar.gz} format) from its
-webpage and follow the same process as below: Lzip has no dependencies, so
-simply decompress, then run @command{./configure}, @command{make},
-@command{sudo make install}.}.
-
-
-Let's assume the downloaded tarball is in the @file{TOPGNUASTRO}
-directory. The first two commands below can be used to decompress the
-source. If you download @file{tar.lz} and your Tar implementation doesn't
-recognize Lzip (the second command fails), run the third and fourth
-lines@footnote{In case Tar doesn't directly uncompress your @file{.tar.lz}
-tarball, you can merge the separate calls to Lzip and Tar (shown in the
-main body of text) into one command by directly piping the output of Lzip
-into Tar with a command like this: @command{$ lzip -cd gnuastro-0.5.tar.lz
-| tar -xf -}}. Note that lines starting with @code{##} don't need to be
-typed.
+The latest official release tarball is always available as @url{http://ftp.gnu.org/gnu/gnuastro/gnuastro-latest.tar.gz, @file{gnuastro-latest.tar.gz}}.
+For better compression (faster download), and robust archival features, an @url{http://www.nongnu.org/lzip/lzip.html, Lzip} compressed tarball is also available at @url{http://ftp.gnu.org/gnu/gnuastro/gnuastro-latest.tar.lz, @file{gnuastro-latest.tar.lz}}, see @ref{Release tarball} for more details on the tarball release@footnote{The Gzip library and program are commonly available on most systems.
+However, Gnuastro recommends Lzip as described above and the beta-releases are also only distributed in @file{tar.lz}.
+You can download and install Lzip's source (in @file{.tar.gz} format) from its webpage and follow the same process as below: Lzip has no dependencies, so simply decompress, then run @command{./configure}, @command{make}, @command{sudo make install}.}.
+
+Let's assume the downloaded tarball is in the @file{TOPGNUASTRO} directory.
+The first two commands below can be used to decompress the source.
+If you download @file{tar.lz} and your Tar implementation doesn't recognize Lzip (the second command fails), run the third and fourth lines@footnote{In case Tar doesn't directly uncompress your @file{.tar.lz} tarball, you can merge the separate calls to Lzip and Tar (shown in the main body of text) into one command by directly piping the output of Lzip into Tar with a command like this: @command{$ lzip -cd gnuastro-0.5.tar.lz | tar -xf -}}.
+Note that lines starting with @code{##} don't need to be typed.
 
 @example
 ## Go into the download directory.
@@ -822,13 +782,9 @@ $ lzip -d gnuastro-latest.tar.lz
 $ tar xf gnuastro-latest.tar
 @end example
 
-Gnuastro has three mandatory dependencies and some optional dependencies
-for extra functionality, see @ref{Dependencies} for the full list. In
-@ref{Dependencies from package managers} we have prepared the command to
-easily install Gnuastro's dependencies using the package manager of some
-operating systems. When the mandatory dependencies are ready, you can
-configure, compile, check and install Gnuastro on your system with the
-following commands.
+Gnuastro has three mandatory dependencies and some optional dependencies for extra functionality, see @ref{Dependencies} for the full list.
+In @ref{Dependencies from package managers} we have prepared the command to easily install Gnuastro's dependencies using the package manager of some operating systems.
+When the mandatory dependencies are ready, you can configure, compile, check and install Gnuastro on your system with the following commands.
 
 @example
 $ cd gnuastro-X.X                  # Replace X.X with version number.
@@ -839,16 +795,12 @@ $ sudo make install
 @end example
 
 @noindent
-See @ref{Known issues} if you confront any complications. For each program
-there is an `Invoke ProgramName' sub-section in this book which explains
-how the programs should be run on the command-line (for example
-@ref{Invoking asttable}). You can read the same section on the command-line
-by running @command{$ info astprogname} (for example @command{info
-asttable}). The `Invoke ProgramName' sub-section starts with a few examples
-of each program and goes on to explain the invocation details. See
-@ref{Getting help} for all the options you have to get help. In
-@ref{Tutorials} some real life examples of how these programs might be used
-are given.
+See @ref{Known issues} if you confront any complications.
+For each program there is an `Invoke ProgramName' sub-section in this book which explains how the programs should be run on the command-line (for example @ref{Invoking asttable}).
+You can read the same section on the command-line by running @command{$ info astprogname} (for example @command{info asttable}).
+The `Invoke ProgramName' sub-section starts with a few examples of each program and goes on to explain the invocation details.
+See @ref{Getting help} for all the options you have to get help.
+In @ref{Tutorials} some real life examples of how these programs might be used are given.
 
 
 
@@ -860,136 +812,75 @@ are given.
 @node Science and its tools, Your rights, Quick start, Introduction
 @section Science and its tools
 
-History of science indicates that there are always inevitably unseen
-faults, hidden assumptions, simplifications and approximations in all
-our theoretical models, data acquisition and analysis techniques. It
-is precisely these that will ultimately allow future generations to
-advance the existing experimental and theoretical knowledge through
-their new solutions and corrections.
-
-In the past, scientists would gather data and process them individually to
-achieve an analysis thus having a much more intricate knowledge of the data
-and analysis. The theoretical models also required little (if any)
-simulations to compare with the data. Today both methods are becoming
-increasingly more dependent on pre-written software. Scientists are
-dissociating themselves from the intricacies of reducing raw observational
-data in experimentation or from bringing the theoretical models to life in
-simulations. These `intricacies' are precisely those unseen faults, hidden
-assumptions, simplifications and approximations that define scientific
-progress.
+History of science indicates that there are always inevitably unseen faults, hidden assumptions, simplifications and approximations in all our theoretical models, data acquisition and analysis techniques.
+It is precisely these that will ultimately allow future generations to advance the existing experimental and theoretical knowledge through their new solutions and corrections.
+
+In the past, scientists would gather data and process them individually to achieve an analysis thus having a much more intricate knowledge of the data and analysis.
+The theoretical models also required little (if any) simulations to compare with the data.
+Today both methods are becoming increasingly more dependent on pre-written software.
+Scientists are dissociating themselves from the intricacies of reducing raw observational data in experimentation or from bringing the theoretical models to life in simulations.
+These `intricacies' are precisely those unseen faults, hidden assumptions, simplifications and approximations that define scientific progress.
 
 @quotation
 @cindex Anscombe F. J.
-Unfortunately, most persons who have recourse to a computer for
-statistical analysis of data are not much interested either in
-computer programming or in statistical method, being primarily
-concerned with their own proper business. Hence the common use of
-library programs and various statistical packages. ... It's time that
-was changed.
-@author F. J. Anscombe. The American Statistician, Vol. 27, No. 1. 1973
+Unfortunately, most persons who have recourse to a computer for statistical analysis of data are not much interested either in computer programming or in statistical method, being primarily concerned with their own proper business.
+Hence the common use of library programs and various statistical packages. ... It's time that was changed.
+@author F.J. Anscombe. The American Statistician, Vol. 27, No. 1. 1973
 @end quotation
 
 @cindex Anscombe's quartet
 @cindex Statistical analysis
-@url{http://en.wikipedia.org/wiki/Anscombe%27s_quartet,Anscombe's quartet}
-demonstrates how four data sets with widely different shapes (when plotted)
-give nearly identical output from standard regression techniques. Anscombe
-uses this (now famous) quartet, which was introduced in the paper quoted
-above, to argue that ``@emph{Good statistical analysis is not a purely
-routine matter, and generally calls for more than one pass through the
-computer}''. Echoing Anscombe's concern after 44 years, some of the highly
-recognized statisticians of our time (Leek, McShane, Gelman, Colquhoun,
-Nuijten and Goodman), wrote in Nature that:
+@url{http://en.wikipedia.org/wiki/Anscombe%27s_quartet,Anscombe's quartet} 
demonstrates how four data sets with widely different shapes (when plotted) 
give nearly identical output from standard regression techniques.
+Anscombe uses this (now famous) quartet, which was introduced in the paper 
quoted above, to argue that ``@emph{Good statistical analysis is not a purely 
routine matter, and generally calls for more than one pass through the 
computer}''.
+Echoing Anscombe's concern after 44 years, some of the highly recognized 
statisticians of our time (Leek, McShane, Gelman, Colquhoun, Nuijten and 
Goodman) wrote in Nature that:
 
 @quotation
-We need to appreciate that data analysis is not purely computational and
-algorithmic — it is a human behaviour....Researchers who hunt hard enough
-will turn up a result that fits statistical criteria — but their discovery
-will probably be a false positive.
+We need to appreciate that data analysis is not purely computational and 
algorithmic -- it is a human behaviour....Researchers who hunt hard enough will 
turn up a result that fits statistical criteria -- but their discovery will 
probably be a false positive.
 @author Five ways to fix statistics, Nature, 551, Nov 2017.
 @end quotation
 
-Users of statistical (scientific) methods (software) are therefore not
-passive (objective) agents in their result. Therefore, it is necessary to
-actually understand the method, not just use it as a black box. The
-subjective experience gained by frequently using a method/software is not
-sufficient to claim an understanding of how the tool/method works and how
-relevant it is to the data and analysis. This kind of subjective experience
-is prone to serious misunderstandings about the data, what the
-software/statistical-method really does (especially as it gets more
-complicated), and thus the scientific interpretation of the result. This
-attitude is further encouraged through non-free
-software@footnote{@url{https://www.gnu.org/philosophy/free-sw.html}},
-poorly written (or non-existent) scientific software manuals, and
-non-reproducible papers@footnote{Where the authors omit many of the
-analysis/processing ``details'' from the paper by arguing that they would
-make the paper too long/unreadable. However, software engineers have been
-dealing with such issues for a long time. There are thus software
-management solutions that allow us to supplement papers with all the
-details necessary to exactly reproduce the result. For example see
-@url{https://doi.org/10.5281/zenodo.1163746, zenodo.1163746} and
-@url{https://doi.org/10.5281/zenodo.1164774, zenodo.1164774} and this @url{
-http://akhlaghi.org/reproducible-science.html, general discussion}.}. This
-approach to scientific software and methods only helps in producing dogmas
-and an ``@emph{obscurantist faith in the expert's special skill, and in his
-personal knowledge and authority}''@footnote{Karl Popper. The logic of
-scientific discovery. 1959. Larger quote is given at the start of the PDF
-(for print) version of this book.}.
+Users of statistical (scientific) methods (software) are therefore not passive 
(objective) agents in their results.
+It is thus necessary to actually understand the method, not just use it as a 
black box.
+The subjective experience gained by frequently using a method/software is not 
sufficient to claim an understanding of how the tool/method works and how 
relevant it is to the data and analysis.
+This kind of subjective experience is prone to serious misunderstandings about 
the data, what the software/statistical-method really does (especially as it 
gets more complicated), and thus the scientific interpretation of the result.
+This attitude is further encouraged through non-free 
software@footnote{@url{https://www.gnu.org/philosophy/free-sw.html}}, poorly 
written (or non-existent) scientific software manuals, and non-reproducible 
papers@footnote{Where the authors omit many of the analysis/processing 
``details'' from the paper by arguing that they would make the paper too 
long/unreadable.
+However, software engineers have been dealing with such issues for a long time.
+There are thus software management solutions that allow us to supplement 
papers with all the details necessary to exactly reproduce the result.
+For example see @url{https://doi.org/10.5281/zenodo.1163746, zenodo.1163746} 
and @url{https://doi.org/10.5281/zenodo.1164774, zenodo.1164774} and this 
@url{http://akhlaghi.org/reproducible-science.html, general discussion}.}.
+This approach to scientific software and methods only helps in producing 
dogmas and an ``@emph{obscurantist faith in the expert's special skill, and in 
his personal knowledge and authority}''@footnote{Karl Popper. The logic of 
scientific discovery. 1959.
+A larger quote is given at the start of the PDF (for print) version of this 
book.}.
 
 @quotation
 @cindex Douglas Rushkoff
-Program or be programmed. Choose the former, and you gain access to
-the control panel of civilization. Choose the latter, and it could be
-the last real choice you get to make.
+Program or be programmed.
+Choose the former, and you gain access to the control panel of civilization.
+Choose the latter, and it could be the last real choice you get to make.
 @author Douglas Rushkoff. Program or be programmed, O/R Books (2010).
 @end quotation
 
-It is obviously impractical for any one human being to gain the intricate
-knowledge explained above for every step of an analysis. On the other hand,
-scientific data can be large and numerous, for example images produced by
-telescopes in astronomy. This requires efficient algorithms. To make things
-worse, natural scientists have generally not been trained in the advanced
-software techniques, paradigms and architecture that are taught in computer
-science or engineering courses and thus used in most software. The GNU
-Astronomy Utilities are an effort to tackle this issue.
-
-Gnuastro is not just a software, this book is as important to the idea
-behind Gnuastro as the source code (software). This book has tried to learn
-from the success of the ``Numerical Recipes'' book in educating those who
-are not software engineers and computer scientists but still heavy users of
-computational algorithms, like astronomers. There are two major
-differences.
-
-The first difference is that Gnuastro's code and the background information
-are segregated: the code is moved within the actual Gnuastro software
-source code and the underlying explanations are given here in this book. In
-the source code, every non-trivial step is heavily commented and correlated
-with this book, it follows the same logic of this book, and all the
-programs follow a similar internal data, function and file structure, see
-@ref{Program source}. Complementing the code, this book focuses on
-thoroughly explaining the concepts behind those codes (history,
-mathematics, science, software and usage advise when necessary) along with
-detailed instructions on how to run the programs. At the expense of
-frustrating ``professionals'' or ``experts'', this book and the comments in
-the code also intentionally avoid jargon and abbreviations. The source code
-and this book are thus intimately linked, and when considered as a single
-entity can be thought of as a real (an actual software accompanying the
-algorithms) ``Numerical Recipes'' for astronomy.
+It is obviously impractical for any one human being to gain the intricate 
knowledge explained above for every step of an analysis.
+On the other hand, scientific data can be large and numerous, for example 
images produced by telescopes in astronomy.
+This requires efficient algorithms.
+To make things worse, natural scientists have generally not been trained in 
the advanced software techniques, paradigms and architecture that are taught in 
computer science or engineering courses and thus used in most software.
+The GNU Astronomy Utilities are an effort to tackle this issue.
+
+Gnuastro is not just software; this book is as important to the idea behind 
Gnuastro as the source code (software).
+This book has tried to learn from the success of the ``Numerical Recipes'' 
book in educating those who are not software engineers and computer scientists 
but still heavy users of computational algorithms, like astronomers.
+There are two major differences.
+
+The first difference is that Gnuastro's code and the background information 
are segregated: the code is kept within the actual Gnuastro software source 
code and the underlying explanations are given here in this book.
+In the source code, every non-trivial step is heavily commented and correlated 
with this book; it follows the same logic as this book, and all the programs 
follow a similar internal data, function and file structure (see @ref{Program 
source}).
+Complementing the code, this book focuses on thoroughly explaining the 
concepts behind those codes (history, mathematics, science, software and usage 
advice when necessary) along with detailed instructions on how to run the 
programs.
+At the expense of frustrating ``professionals'' or ``experts'', this book and 
the comments in the code also intentionally avoid jargon and abbreviations.
+The source code and this book are thus intimately linked, and when considered 
as a single entity can be thought of as a real (an actual software accompanying 
the algorithms) ``Numerical Recipes'' for astronomy.
 
 @cindex GNU free documentation license
 @cindex GNU General Public License (GPL)
-The second major, and arguably more important, difference is that
-``Numerical Recipes'' does not allow you to distribute any code that you
-have learned from it. In other words, it does not allow you to release your
-software's source code if you have used their codes, you can only publicly
-release binaries (a black box) to the community. Therefore, while it
-empowers the privileged individual who has access to it, it exacerbates
-social ignorance. Exactly at the opposite end of the spectrum, Gnuastro's
-source code is released under the GNU general public license (GPL) and this
-book is released under the GNU free documentation license. You are
-therefore free to distribute any software you create using parts of
-Gnuastro's source code or text, or figures from this book, see @ref{Your
-rights}.
+The second major, and arguably more important, difference is that ``Numerical 
Recipes'' does not allow you to distribute any code that you have learned from 
it.
+In other words, it does not allow you to release your software's source code 
if you have used their codes; you can only publicly release binaries (a black 
box) to the community.
+Therefore, while it empowers the privileged individual who has access to it, 
it exacerbates social ignorance.
+Exactly at the opposite end of the spectrum, Gnuastro's source code is 
released under the GNU general public license (GPL) and this book is released 
under the GNU free documentation license.
+You are therefore free to distribute any software you create using parts of 
Gnuastro's source code or text, or figures from this book, see @ref{Your 
rights}.
 
 With these principles in mind, Gnuastro's developers aim to impose the
 minimum requirements on you (in computer science, engineering and even the
@@ -999,77 +890,41 @@ philosophy}.
 
 @cindex Brahe, Tycho
 @cindex Galileo, Galilei
-Without prior familiarity and experience with optics, it is hard to imagine
-how, Galileo could have come up with the idea of modifying the Dutch
-military telescope optics to use in astronomy. Astronomical objects could
-not be seen with the Dutch military design of the telescope. In other
-words, it is unlikely that Galileo could have asked a random optician to
-make modifications (not understood by Galileo) to the Dutch design, to do
-something no astronomer of the time took seriously. In the paradigm of the
-day, what could be the purpose of enlarging geometric spheres (planets) or
-points (stars)? In that paradigm only the position and movement of the
-heavenly bodies was important, and that had already been accurately studied
-(recently by Tycho Brahe).
-
-In the beginning of his ``The Sidereal Messenger'' (published in 1610) he
-cautions the readers on this issue and @emph{before} describing his
-results/observations, Galileo instructs us on how to build a suitable
-instrument. Without a detailed description of @emph{how} he made his tools
-and done his observations, no reasonable person would believe his
-results. Before he actually saw the moons of Jupiter, the mountains on the
-Moon or the crescent of Venus, Galileo was “evasive”@footnote{Galileo
-G. (Translated by Maurice A. Finocchiaro). @emph{The essential
-Galileo}. Hackett publishing company, first edition, 2008.} to
-Kepler. Science is defined by its tools/methods, @emph{not} its raw
-results@footnote{For example, take the following two results on the age of
-the universe: roughly 14 billion years (suggested by the current consensus
-of the standard model of cosmology) and less than 10,000 years (suggested
-from some interpretations of the Bible). Both these numbers are
-@emph{results}. What distinguishes these two results, is the tools/methods
-that were used to derive them. Therefore, as the term ``Scientific method''
-also signifies, a scientific statement it defined by its @emph{method}, not
-its result.}.
-
-The same is true today: science cannot progress with a black box, or poorly
-released code. The source code of a research is the new (abstractified)
-communication language in science, understandable by humans @emph{and}
-computers. Source code (in any programming language) is a language/notation
-designed to express all the details that would be too
-tedious/long/frustrating to report in spoken languages like English,
-similar to mathematic notation.
-
-Today, the quality of the source code that goes into a scientific result
-(and the distribution of that code) is as critical to scientific vitality
-and integrity, as the quality of its written language/English used in
-publishing/distributing its paper. A scientific paper will not even be
-reviewed by any respectable journal if its written in a poor
-language/English. A similar level of quality assessment is thus
-increasingly becoming necessary regarding the codes/methods used to derive
-the results of a scientific paper.
+Without prior familiarity and experience with optics, it is hard to imagine 
how Galileo could have come up with the idea of modifying the Dutch military 
telescope optics for use in astronomy.
+Astronomical objects could not be seen with the Dutch military design of the 
telescope.
+In other words, it is unlikely that Galileo could have asked a random optician 
to make modifications (not understood by Galileo) to the Dutch design, to do 
something no astronomer of the time took seriously.
+In the paradigm of the day, what could be the purpose of enlarging geometric 
spheres (planets) or points (stars)?
+In that paradigm only the position and movement of the heavenly bodies were 
important, and that had already been accurately studied (recently by Tycho 
Brahe).
+
+In the beginning of ``The Sidereal Messenger'' (published in 1610), Galileo 
cautions the readers on this issue and, @emph{before} describing his 
results/observations, instructs us on how to build a suitable instrument.
+Without a detailed description of @emph{how} he made his tools and did his 
observations, no reasonable person would believe his results.
+Before he actually saw the moons of Jupiter, the mountains on the Moon or the 
crescent of Venus, Galileo was ``evasive''@footnote{Galileo G. (Translated by 
Maurice A. Finocchiaro). @emph{The essential Galileo}. Hackett publishing 
company, first edition, 2008.} to Kepler.
+Science is defined by its tools/methods, @emph{not} its raw 
results@footnote{For example, take the following two results on the age of the 
universe: roughly 14 billion years (suggested by the current consensus of the 
standard model of cosmology) and less than 10,000 years (suggested from some 
interpretations of the Bible).
+Both these numbers are @emph{results}.
+What distinguishes these two results is the tools/methods that were used to 
derive them.
+Therefore, as the term ``Scientific method'' also signifies, a scientific 
statement is defined by its @emph{method}, not its result.}.
+
+The same is true today: science cannot progress with a black box, or poorly 
released code.
+The source code of a research project is the new (abstractified) communication 
language in science, understandable by humans @emph{and} computers.
+Source code (in any programming language) is a language/notation designed to 
express all the details that would be too tedious/long/frustrating to report in 
spoken languages like English, similar to mathematical notation.
+
+Today, the quality of the source code that goes into a scientific result (and 
the distribution of that code) is as critical to scientific vitality and 
integrity as the quality of the written language/English used in 
publishing/distributing its paper.
+A scientific paper will not even be reviewed by any respectable journal if it 
is written in poor language/English.
+A similar level of quality assessment is thus increasingly becoming necessary 
regarding the codes/methods used to derive the results of a scientific paper.
 
 @cindex Ken Thompson
 @cindex Stroustrup, Bjarne
-Bjarne Stroustrup (creator of the C++ language) says: ``@emph{Without
-understanding software, you are reduced to believing in magic}''.  Ken
-Thomson (the designer or the Unix operating system) says ``@emph{I abhor a
-system designed for the `user' if that word is a coded pejorative meaning
-`stupid and unsophisticated'}.'' Certainly no scientist (user of a
-scientific software) would want to be considered a believer in magic, or
-stupid and unsophisticated.
-
-This can happen when scientists get too distant from the raw data and
-methods, and are mainly discussing results. In other words, when they feel
-they have tamed Nature into their own high-level (abstract) models
-(creations), and are mainly concerned with scaling up, or industrializing
-those results. Roughly five years before special relativity, and about two
-decades before quantum mechanics fundamentally changed Physics, Lord Kelvin
-is quoted as saying:
+Bjarne Stroustrup (creator of the C++ language) says: ``@emph{Without 
understanding software, you are reduced to believing in magic}''.
+Ken Thompson (the designer of the Unix operating system) says ``@emph{I abhor 
a system designed for the `user' if that word is a coded pejorative meaning 
`stupid and unsophisticated'}.''
+Certainly no scientist (user of scientific software) would want to be 
considered a believer in magic, or stupid and unsophisticated.
+
+This can happen when scientists get too distant from the raw data and methods, 
and are mainly discussing results.
+In other words, when they feel they have tamed Nature into their own 
high-level (abstract) models (creations), and are mainly concerned with scaling 
up, or industrializing those results.
+Roughly five years before special relativity, and about two decades before 
quantum mechanics fundamentally changed Physics, Lord Kelvin is quoted as 
saying:
 
 @quotation
 @cindex Lord Kelvin
 @cindex William Thomson
-There is nothing new to be discovered in physics now. All that remains
-is more and more precise measurement.
+There is nothing new to be discovered in physics now.
+All that remains is more and more precise measurement.
 @author William Thomson (Lord Kelvin), 1900
 @end quotation
 
@@ -1079,36 +934,21 @@ A few years earlier Albert. A. Michelson made the 
following statement:
 @quotation
 @cindex Albert A. Michelson
 @cindex Michelson, Albert A.
-The more important fundamental laws and facts of physical science have
-all been discovered, and these are now so firmly established that the
-possibility of their ever being supplanted in consequence of new
-discoveries is exceedingly remote.... Our future discoveries must be
-looked for in the sixth place of decimals.
+The more important fundamental laws and facts of physical science have all 
been discovered, and these are now so firmly established that the possibility 
of their ever being supplanted in consequence of new discoveries is exceedingly 
remote....
+Our future discoveries must be looked for in the sixth place of decimals.
 @author Albert A. Michelson, dedication of Ryerson Physics Lab, U. Chicago 
1894
 @end quotation
 
 @cindex Puzzle solving scientist
 @cindex Scientist, puzzle solver
-If scientists are considered to be more than mere ``puzzle''
-solvers@footnote{Thomas S. Kuhn. @emph{The Structure of Scientific
-Revolutions}, University of Chicago Press, 1962.} (simply adding to the
-decimals of existing values or observing a feature in 10, 100, or 100000
-more galaxies or stars, as Kelvin and Michelson clearly believed), they
-cannot just passively sit back and uncritically repeat the previous
-(observational or theoretical) methods/tools on new data. Today there is a
-wealth of raw telescope images ready (mostly for free) at the finger tips
-of anyone who is interested with a fast enough internet connection to
-download them. The only thing lacking is new ways to analyze this data and
-dig out the treasure that is lying hidden in them to existing methods and
-techniques.
+If scientists are considered to be more than mere ``puzzle'' 
solvers@footnote{Thomas S. Kuhn. @emph{The Structure of Scientific 
Revolutions}, University of Chicago Press, 1962.} (simply adding to the 
decimals of existing values or observing a feature in 10, 100, or 100000 more 
galaxies or stars, as Kelvin and Michelson clearly believed), they cannot just 
passively sit back and uncritically repeat the previous (observational or 
theoretical) methods/tools on new data.
+Today there is a wealth of raw telescope images ready (mostly for free) at the 
fingertips of anyone with a fast enough internet connection to download them.
+The only thing lacking is new ways to analyze this data and dig out the 
treasure that lies hidden in them from existing methods and techniques.
 
 @quotation
 @cindex Jaynes E. T.
-New data that we insist on analyzing in terms of old ideas (that is,
-old models which are not questioned) cannot lead us out of the old
-ideas. However many data we record and analyze, we may just keep
-repeating the same old errors, missing the same crucially important
-things that the experiment was competent to find.
+New data that we insist on analyzing in terms of old ideas (that is, old 
models which are not questioned) cannot lead us out of the old ideas.
+However many data we record and analyze, we may just keep repeating the same 
old errors, missing the same crucially important things that the experiment was 
competent to find.
 @author Jaynes, Probability theory, the logic of science. Cambridge U. Press 
(2003).
 @end quotation
 
@@ -1119,52 +959,29 @@ things that the experiment was competent to find.
 @section Your rights
 
 @cindex GNU Texinfo
-The paragraphs below, in this section, belong to the GNU
-Texinfo@footnote{Texinfo is the GNU documentation system. It is used
-to create this book in all the various formats.} manual and are not
-written by us! The name ``Texinfo'' is just changed to ``GNU Astronomy
-Utilities'' or ``Gnuastro'' because they are released under the same
-licenses and it is beautifully written to inform you of your rights.
+The paragraphs below, in this section, belong to the GNU 
Texinfo@footnote{Texinfo is the GNU documentation system.
+It is used to create this book in all the various formats.} manual and are not 
written by us!
+The name ``Texinfo'' is just changed to ``GNU Astronomy Utilities'' or 
``Gnuastro'' because they are released under the same licenses, and it is 
beautifully written to inform you of your rights.
 
 @cindex Free software
 @cindex Copyright
 @cindex Public domain
-GNU Astronomy Utilities is ``free software''; this means that everyone
-is free to use it and free to redistribute it on certain
-conditions. Gnuastro is not in the public domain; it is copyrighted
-and there are restrictions on its distribution, but these restrictions
-are designed to permit everything that a good cooperating citizen
-would want to do.  What is not allowed is to try to prevent others
-from further sharing any version of Gnuastro that they might get from
-you.
-
-Specifically, we want to make sure that you have the right to give
-away copies of the programs that relate to Gnuastro, that you receive
-the source code or else can get it if you want it, that you can change
-these programs or use pieces of them in new free programs, and that
-you know you can do these things.
-
-To make sure that everyone has such rights, we have to forbid you to
-deprive anyone else of these rights.  For example, if you distribute
-copies of the Gnuastro related programs, you must give the recipients
-all the rights that you have.  You must make sure that they, too,
-receive or can get the source code.  And you must tell them their
-rights.
-
-Also, for our own protection, we must make certain that everyone finds
-out that there is no warranty for the programs that relate to Gnuastro.
-If these programs are modified by someone else and passed on, we want
-their recipients to know that what they have is not what we distributed,
-so that any problems introduced by others will not reflect on our
-reputation.
+GNU Astronomy Utilities is ``free software''; this means that everyone is free 
to use it and free to redistribute it on certain conditions.
+Gnuastro is not in the public domain; it is copyrighted and there are 
restrictions on its distribution, but these restrictions are designed to permit 
everything that a good cooperating citizen would want to do.
+What is not allowed is to try to prevent others from further sharing any 
version of Gnuastro that they might get from you.
+
+Specifically, we want to make sure that you have the right to give away copies 
of the programs that relate to Gnuastro, that you receive the source code or 
else can get it if you want it, that you can change these programs or use 
pieces of them in new free programs, and that you know you can do these things.
+
+To make sure that everyone has such rights, we have to forbid you to deprive 
anyone else of these rights.
+For example, if you distribute copies of the Gnuastro related programs, you 
must give the recipients all the rights that you have.
+You must make sure that they, too, receive or can get the source code.
+And you must tell them their rights.
+
+Also, for our own protection, we must make certain that everyone finds out 
that there is no warranty for the programs that relate to Gnuastro.
+If these programs are modified by someone else and passed on, we want their 
recipients to know that what they have is not what we distributed, so that any 
problems introduced by others will not reflect on our reputation.
 
 @cindex GNU General Public License (GPL)
 @cindex GNU Free Documentation License
-The full text of the licenses for the Gnuastro book and software can be
-respectively found in @ref{GNU General Public License}@footnote{Also
-available in @url{http://www.gnu.org/copyleft/gpl.html}} and @ref{GNU Free
-Doc. License}@footnote{Also available in
-@url{http://www.gnu.org/copyleft/fdl.html}}.
+The full text of the licenses for the Gnuastro book and software can be 
respectively found in @ref{GNU General Public License}@footnote{Also available 
in @url{http://www.gnu.org/copyleft/gpl.html}} and @ref{GNU Free Doc. 
License}@footnote{Also available in @url{http://www.gnu.org/copyleft/fdl.html}}.
 
 
 
@@ -1173,24 +990,17 @@ Doc. License}@footnote{Also available in
 
 @cindex Names, programs
 @cindex Program names
-Gnuastro is a package of independent programs and a collection of
-libraries, here we are mainly concerned with the programs. Each program has
-an official name which consists of one or two words, describing what they
-do. The latter are printed with no space, for example NoiseChisel or
-Crop. On the command-line, you can run them with their executable names
-which start with an @file{ast} and might be an abbreviation of the official
-name, for example @file{astnoisechisel} or @file{astcrop}, see
-@ref{Executable names}.
+Gnuastro is a package of independent programs and a collection of libraries; 
here we are mainly concerned with the programs.
+Each program has an official name consisting of one or two words, describing 
what it does.
+The latter are printed with no space, for example NoiseChisel or Crop.
+On the command-line, you can run them with their executable names, which start 
with @file{ast} and might be an abbreviation of the official name, for example 
@file{astnoisechisel} or @file{astcrop}, see @ref{Executable names}.
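+
+For example (a minimal illustration, assuming Gnuastro is already installed), 
you can call any program through its executable name; the standard GNU 
@option{--version} option simply reports the installed version:
+
+@example
+$ astnoisechisel --version    # Official name: NoiseChisel
+$ astcrop --version           # Official name: Crop
+@end example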
 
 @pindex ProgramName
 @pindex @file{astprogname}
-We will use ``ProgramName'' for a generic official program name and
-@file{astprogname} for a generic executable name. In this book, the
-programs are classified based on what they do and thoroughly explained. An
-alphabetical list of the programs that are installed on your system with
-this installation are given in @ref{Gnuastro programs list}. That list also
-contains the executable names and version numbers along with a one line
-description.
+We will use ``ProgramName'' for a generic official program name and 
@file{astprogname} for a generic executable name.
+In this book, the programs are classified based on what they do and thoroughly 
explained.
+An alphabetical list of the programs that are installed on your system with 
this installation is given in @ref{Gnuastro programs list}.
+That list also contains the executable names and version numbers along with a 
one-line description.
 
 
 
@@ -1202,54 +1012,31 @@ description.
 @cindex Major version number
 @cindex Minor version number
 @cindex Mailing list: info-gnuastro
-Gnuastro can have two formats of version numbers, for official and
-unofficial releases. Official Gnuastro releases are announced on the
-@command{info-gnuastro} mailing list, they have a version control tag in
-Gnuastro's development history, and their version numbers are formatted
-like ``@file{A.B}''. @file{A} is a major version number, marking a
-significant planned achievement (for example see @ref{GNU Astronomy
-Utilities 1.0}), while @file{B} is a minor version number, see below for
-more on the distinction. Note that the numbers are not decimals, so version
-2.34 is much more recent than version 2.5, which is not equal to 2.50.
-
-Gnuastro also allows a unique version number for unofficial
-releases. Unofficial releases can mark any point in Gnuastro's development
-history. This is done to allow astronomers to easily use any point in the
-version controlled history for their data-analysis and research
-publication. See @ref{Version controlled source} for a complete
-introduction. This section is not just for developers and is intended to
-straightforward and easy to read, so please have a look if you are
-interested in the cutting-edge. This unofficial version number is a
-meaningful and easy to read string of characters, unique to that particular
-point of history. With this feature, users can easily stay up to date with
-the most recent bug fixes and additions that are committed between official
-releases.
-
-The unofficial version number is formatted like: @file{A.B.C-D}. @file{A}
-and @file{B} are the most recent official version number. @file{C} is the
-number of commits that have been made after version @file{A.B}. @file{D} is
-the first 4 or 5 characters of the commit hash number@footnote{Each point
-in Gnuastro's history is uniquely identified with a 40 character long hash
-which is created from its contents and previous history for example:
-@code{5b17501d8f29ba3cd610673261e6e2229c846d35}. So the string @file{D} in
-the version for this commit could be @file{5b17}, or
-@file{5b175}.}. Therefore, the unofficial version number
-`@code{3.92.8-29c8}', corresponds to the 8th commit after the official
-version @code{3.92} and its commit hash begins with @code{29c8}. The
-unofficial version number is sort-able (unlike the raw hash) and as shown
-above is descriptive of the state of the unofficial release. Of course an
-official release is preferred for publication (since its tarballs are
-easily available and it has gone through more tests, making it more
-stable), so if an official release is announced prior to your publication's
-final review, please consider updating to the official release.
-
-The major version number is set by a major goal which is defined by the
-developers and user community before hand, for example see @ref{GNU
-Astronomy Utilities 1.0}. The incremental work done in minor releases are
-commonly small steps in achieving the major goal. Therefore, there is no
-limit on the number of minor releases and the difference between the
-(hypothetical) versions 2.927 and 3.0 can be a small (negligible to the
-user) improvement that finalizes the defined goals.
+Gnuastro can have two formats of version numbers, for official and unofficial 
releases.
+Official Gnuastro releases are announced on the @command{info-gnuastro} 
mailing list; they have a version control tag in Gnuastro's development 
history, and their version numbers are formatted like ``@file{A.B}''.
+@file{A} is a major version number, marking a significant planned achievement 
(for example see @ref{GNU Astronomy Utilities 1.0}), while @file{B} is a minor 
version number, see below for more on the distinction.
+Note that the numbers are not decimals, so version 2.34 is much more recent 
than version 2.5, which is not equal to 2.50.
+
+Gnuastro also allows a unique version number for unofficial releases.
+Unofficial releases can mark any point in Gnuastro's development history.
+This is done to allow astronomers to easily use any point in the version 
controlled history for their data-analysis and research publication.
+See @ref{Version controlled source} for a complete introduction.
+This section is not just for developers and is intended to be straightforward 
and easy to read, so please have a look if you are interested in the 
cutting-edge.
+This unofficial version number is a meaningful and easy to read string of 
characters, unique to that particular point of history.
+With this feature, users can easily stay up to date with the most recent bug 
fixes and additions that are committed between official releases.
+
+The unofficial version number is formatted like: @file{A.B.C-D}.
+@file{A} and @file{B} come from the most recent official version number.
+@file{C} is the number of commits that have been made after version @file{A.B}.
+@file{D} is the first 4 or 5 characters of the commit hash 
number@footnote{Each point in Gnuastro's history is uniquely identified with a 
40 character long hash which is created from its contents and previous history 
for example: @code{5b17501d8f29ba3cd610673261e6e2229c846d35}.
+So the string @file{D} in the version for this commit could be @file{5b17}, or 
@file{5b175}.}.
+Therefore, the unofficial version number `@code{3.92.8-29c8}' corresponds to 
the 8th commit after the official version @code{3.92}, and its commit hash 
begins with @code{29c8}.
+The unofficial version number is sort-able (unlike the raw hash) and as shown 
above is descriptive of the state of the unofficial release.
+Of course an official release is preferred for publication (since its tarballs 
are easily available and it has gone through more tests, making it more 
stable), so if an official release is announced prior to your publication's 
final review, please consider updating to the official release.
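+
+For illustration, a string with this format can be approximated directly with 
Git.
+This is only a sketch (it assumes release tags named like @code{3.92}, and the 
hash in the output is an illustrative value); the exact command used to 
generate Gnuastro's version string may differ:
+
+@example
+$ git describe --tags
+3.92-8-g29c8b26
+@end example
+
+The @code{8} (commits since the tag) and the characters after the @code{g} 
correspond to @file{C} and @file{D} above.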
+
+The major version number is set by a major goal which is defined by the 
developers and user community before hand, for example see @ref{GNU Astronomy 
Utilities 1.0}.
+The incremental work done in minor releases commonly consists of small steps 
in achieving the major goal.
+Therefore, there is no limit on the number of minor releases and the 
difference between the (hypothetical) versions 2.927 and 3.0 can be a small 
(negligible to the user) improvement that finalizes the defined goals.
 
 @menu
 * GNU Astronomy Utilities 1.0::  Plans for version 1.0 release
@@ -1258,34 +1045,21 @@ user) improvement that finalizes the defined goals.
 @node GNU Astronomy Utilities 1.0,  , Version numbering, Version numbering
 @subsection GNU Astronomy Utilities 1.0
 @cindex Gnuastro major version number
-Currently (prior to Gnuastro 1.0), the aim of Gnuastro is to have a
-complete system for data manipulation and analysis at least similar to
-IRAF@footnote{@url{http://iraf.noao.edu/}}. So an astronomer can take all
-the standard data analysis steps (starting from raw data to the final
-reduced product and standard post-reduction tools) with the various
-programs in Gnuastro.
+Currently (prior to Gnuastro 1.0), the aim of Gnuastro is to have a complete 
system for data manipulation and analysis at least similar to 
IRAF@footnote{@url{http://iraf.noao.edu/}}.
+So an astronomer can take all the standard data analysis steps (starting from 
raw data to the final reduced product and standard post-reduction tools) with 
the various programs in Gnuastro.
 
 @cindex Shell script
-The maintainers of each camera or detector on a telescope can provide a
-completely transparent shell script or Makefile to the observer for data
-analysis. This script can set configuration files for all the required
-programs to work with that particular camera. The script can then run the
-proper programs in the proper sequence. The user/observer can easily follow
-the standard shell script to understand (and modify) each step and the
-parameters used easily. Bash (or other modern GNU/Linux shell scripts) is
-powerful and made for this gluing job. This will simultaneously improve
-performance and transparency. Shell scripting (or Makefiles) are also basic
-constructs that are easy to learn and readily available as part of the
-Unix-like operating systems. If there is no program to do a desired step,
-Gnuastro's libraries can be used to build specific programs.
+The maintainers of each camera or detector on a telescope can provide a 
completely transparent shell script or Makefile to the observer for data 
analysis.
+This script can set configuration files for all the required programs to work 
with that particular camera.
+The script can then run the proper programs in the proper sequence.
+The user/observer can easily follow the standard shell script to understand 
(and modify) each step and the parameters used.
+Bash (or any other modern GNU/Linux shell) is powerful and made for this 
gluing job.
+This will simultaneously improve performance and transparency.
+Shell scripts (or Makefiles) are also basic constructs that are easy to learn 
and readily available as part of the Unix-like operating systems.
+If there is no program to do a desired step, Gnuastro's libraries can be used 
to build specific programs.
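+
+As a rough sketch of such a glue script (the configuration file names and the 
input/output file names here are hypothetical, only to show the idea):
+
+@example
+#!/bin/sh
+# Hypothetical reduction for one camera: each program loads a
+# configuration file tuned by the camera's maintainers.
+astnoisechisel raw.fits --config=cameraX-nc.conf  --output=det.fits
+astsegment     det.fits --config=cameraX-seg.conf --output=seg.fits
+astmkcatalog   seg.fits --output=cat.fits
+@end example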
 
-The main factor is that all observatories or projects can freely contribute
-to Gnuastro and all simultaneously benefit from it (since it doesn't belong
-to any particular one of them), much like how for-profit organizations (for
-example RedHat, or Intel and many others) are major contributors to free
-and open source software for their shared benefit. Gnuastro's copyright has
-been fully awarded to GNU, so it doesn't belong to any particular
-astronomer or astronomical facility or project.
+The main factor is that all observatories or projects can freely contribute to 
Gnuastro and all simultaneously benefit from it (since it doesn't belong to any 
particular one of them), much like how for-profit organizations (for example 
RedHat, or Intel and many others) are major contributors to free and open 
source software for their shared benefit.
+Gnuastro's copyright has been fully awarded to GNU, so it doesn't belong to 
any particular astronomer or astronomical facility or project.
 
 
 
@@ -1294,24 +1068,15 @@ astronomer or astronomical facility or project.
 @node New to GNU/Linux?, Report a bug, Version numbering, Introduction
 @section New to GNU/Linux?
 
-Some astronomers initially install and use a GNU/Linux operating system
-because their necessary tools can only be installed in this environment.
-However, the transition is not necessarily easy. To encourage you in
-investing the patience and time to make this transition, and actually enjoy
-it, we will first start with a basic introduction to GNU/Linux operating
-systems. Afterwards, in @ref{Command-line interface} we'll discuss the
-wonderful benefits of the command-line interface, how it beautifully
-complements the graphic user interface, and why it is worth the (apparently
-steep) learning curve. Finally a complete chapter (@ref{Tutorials}) is
-devoted to real world scenarios of using Gnuastro (on the
-command-line). Therefore if you don't yet feel comfortable with the
-command-line we strongly recommend going through that chapter after
-finishing this section.
-
-You might have already noticed that we are not using the name ``Linux'',
-but ``GNU/Linux''. Please take the time to have a look at the following
-essays and FAQs for a complete understanding of this very important
-distinction.
+Some astronomers initially install and use a GNU/Linux operating system 
because their necessary tools can only be installed in this environment.
+However, the transition is not necessarily easy.
+To encourage you to invest the patience and time to make this transition, and 
actually enjoy it, we will start with a basic introduction to GNU/Linux 
operating systems.
+Afterwards, in @ref{Command-line interface} we'll discuss the wonderful 
benefits of the command-line interface, how it beautifully complements the 
graphic user interface, and why it is worth the (apparently steep) learning 
curve.
+Finally, a complete chapter (@ref{Tutorials}) is devoted to real-world 
scenarios of using Gnuastro (on the command-line).
+Therefore, if you don't yet feel comfortable with the command-line, we 
strongly recommend going through that chapter after finishing this section.
+
+You might have already noticed that we are not using the name ``Linux'', but 
``GNU/Linux''.
+Please take the time to have a look at the following essays and FAQs for a 
complete understanding of this very important distinction.
 
 @itemize
 
@@ -1333,20 +1098,12 @@ distinction.
 @cindex GNU/Linux
 @cindex GNU C library
 @cindex GNU Compiler Collection
-In short, the Linux kernel@footnote{In Unix-like operating systems, the
-kernel connects software and hardware worlds.} is built using the GNU C
-library (glibc) and GNU compiler collection (gcc). The Linux kernel
-software alone is just a means for other software to access the hardware
-resources, it is useless alone: to say “running Linux”, is like saying
-“driving your carburetor”.
-
-To have an operating system, you need lower-level (to build the kernel),
-and higher-level (to use it) software packages. The majority of such
-software in most Unix-like operating systems are GNU software: ``the whole
-system is basically GNU with Linux loaded''. Therefore to acknowledge GNU's
-instrumental role in the creation and usage of the Linux kernel and the
-operating systems that use it, we should call these operating systems
-``GNU/Linux''.
+In short, the Linux kernel@footnote{In Unix-like operating systems, the kernel 
connects software and hardware worlds.} is built using the GNU C library 
(glibc) and GNU compiler collection (gcc).
+The Linux kernel software alone is just a means for other software to access 
the hardware resources; it is useless on its own: to say ``running Linux'' is 
like saying ``driving your carburetor''.
+
+To have an operating system, you need lower-level (to build the kernel), and 
higher-level (to use it) software packages.
+The majority of such software in most Unix-like operating systems is GNU 
software: ``the whole system is basically GNU with Linux loaded''.
+Therefore, to acknowledge GNU's instrumental role in the creation and usage of 
the Linux kernel and the operating systems that use it, we should call these 
operating systems ``GNU/Linux''.
 
 
 @menu
@@ -1360,113 +1117,67 @@ operating systems that use it, we should call these 
operating systems
 @cindex Command-line user interface
 @cindex GUI: graphic user interface
 @cindex CLI: command-line user interface
-One aspect of Gnuastro that might be a little troubling to new GNU/Linux
-users is that (at least for the time being) it only has a command-line user
-interface (CLI). This might be contrary to the mostly graphical user
-interface (GUI) experience with proprietary operating systems. Since the
-various actions available aren't always on the screen, the command-line
-interface can be complicated, intimidating, and frustrating for a
-first-time user. This is understandable and also experienced by anyone who
-started using the computer (from childhood) in a graphical user interface
-(this includes most of Gnuastro's authors). Here we hope to convince you of
-the unique benefits of this interface which can greatly enhance your
-productivity while complementing your GUI experience.
+One aspect of Gnuastro that might be a little troubling to new GNU/Linux users 
is that (at least for the time being) it only has a command-line user interface 
(CLI).
+This might be contrary to the mostly graphical user interface (GUI) experience 
with proprietary operating systems.
+Since the various actions available aren't always on the screen, the 
command-line interface can be complicated, intimidating, and frustrating for a 
first-time user.
+This is understandable and also experienced by anyone who started using the 
computer (from childhood) in a graphical user interface (this includes most of 
Gnuastro's authors).
+Here we hope to convince you of the unique benefits of this interface which 
can greatly enhance your productivity while complementing your GUI experience.
 
 @cindex GNOME 3
-Through GNOME 3@footnote{@url{http://www.gnome.org/}}, most GNU/Linux based
-operating systems now have an advanced and useful GUI. Since the GUI was
-created long after the command-line, some wrongly consider the command line
-to be obsolete. Both interfaces are useful for different tasks. For example
-you can't view an image, video, pdf document or web page on the
-command-line. On the other hand you can't reproduce your results easily in
-the GUI. Therefore they should not be regarded as rivals but as
-complementary user interfaces, here we will outline how the CLI can be
-useful in scientific programs.
-
-You can think of the GUI as a veneer over the CLI to facilitate a small
-subset of all the possible CLI operations. Each click you do on the GUI,
-can be thought of as internally running a different CLI command. So
-asymptotically (if a good designer can design a GUI which is able to show
-you all the possibilities to click on) the GUI is only as powerful as the
-command-line. In practice, such graphical designers are very hard to find
-for every program, so the GUI operations are always a subset of the
-internal CLI commands. For programs that are only made for the GUI, this
-results in not including lots of potentially useful operations. It also
-results in `interface design' to be a crucially important part of any GUI
-program. Scientists don't usually have enough resources to hire a graphical
-designer, also the complexity of the GUI code is far more than CLI code,
-which is harmful for a scientific software, see @ref{Science and its
-tools}.
+Through GNOME 3@footnote{@url{http://www.gnome.org/}}, most GNU/Linux based 
operating systems now have an advanced and useful GUI.
+Since the GUI was created long after the command-line, some wrongly consider 
the command line to be obsolete.
+Both interfaces are useful for different tasks.
+For example, you can't view an image, video, PDF document or web page on the 
command-line.
+On the other hand, you can't reproduce your results easily in the GUI.
+Therefore they should not be regarded as rivals, but as complementary user 
interfaces; here we will outline how the CLI can be useful in scientific 
programs.
+
+You can think of the GUI as a veneer over the CLI to facilitate a small subset 
of all the possible CLI operations.
+Each click you do on the GUI can be thought of as internally running a 
different CLI command.
+So asymptotically (if a good designer can design a GUI which is able to show 
you all the possibilities to click on) the GUI is only as powerful as the 
command-line.
+In practice, such graphical designers are very hard to find for every program, 
so the GUI operations are always a subset of the internal CLI commands.
+For programs that are only made for the GUI, this results in not including 
lots of potentially useful operations.
+It also makes `interface design' a crucially important part of any GUI 
program.
+Scientists don't usually have enough resources to hire a graphical designer; 
also, the complexity of GUI code is far greater than that of CLI code, which is 
harmful for scientific software, see @ref{Science and its tools}.
 
 @cindex GUI: repeating operations
-For programs that have a GUI, one action on the GUI (moving and clicking a
-mouse, or tapping a touchscreen) might be more efficient and easier than
-its CLI counterpart (typing the program name and your desired
-configuration). However, if you have to repeat that same action more than
-once, the GUI will soon become frustrating and prone to errors. Unless the
-designers of a particular program decided to design such a system for a
-particular GUI action, there is no general way to run any possible series
-of actions automatically on the GUI.
+For programs that have a GUI, one action on the GUI (moving and clicking a 
mouse, or tapping a touchscreen) might be more efficient and easier than its 
CLI counterpart (typing the program name and your desired configuration).
+However, if you have to repeat that same action more than once, the GUI will 
soon become frustrating and prone to errors.
+Unless the designers of a particular program decided to design such a system 
for a particular GUI action, there is no general way to run any possible series 
of actions automatically on the GUI.
 
 @cindex GNU Bash
 @cindex Reproducible results
 @cindex CLI: repeating operations
-On the command-line, you can run any series of of actions which can come
-from various CLI capable programs you have decided your self in any
-possible permutation with one command@footnote{By writing a shell script
-and running it, for example see the tutorials in @ref{Tutorials}.}. This
-allows for much more creativity and exact reproducibility that is not
-possible to a GUI user. For technical and scientific operations, where the
-same operation (using various programs) has to be done on a large set of
-data files, this is crucially important. It also allows exact
-reproducibility which is a foundation principle for scientific results. The
-most common CLI (which is also known as a shell) in GNU/Linux is GNU Bash,
-we strongly encourage you to put aside several hours and go through this
-beautifully explained web page:
-@url{https://flossmanuals.net/command-line/}. You don't need to read or
-even fully understand the whole thing, only a general knowledge of the
-first few chapters are enough to get you going.
-
-Since the operations in the GUI are limited and they are visible, reading a
-manual is not that important in the GUI (most programs don't even have
-any!). However, to give you the creative power explained above, with a CLI
-program, it is best if you first read the manual of any program you are
-using. You don't need to memorize any details, only an understanding of the
-generalities is needed. Once you start working, there are more easier ways
-to remember a particular option or operation detail, see @ref{Getting
-help}.
+On the command-line, you can run any series of actions, coming from various 
CLI-capable programs you have chosen yourself, in any possible permutation with 
one command@footnote{By writing a shell script and running it, for example see 
the tutorials in @ref{Tutorials}.}.
+This allows for much more creativity and exact reproducibility that are not 
possible for a GUI user.
+For technical and scientific operations, where the same operation (using 
various programs) has to be done on a large set of data files, this is 
crucially important.
+It also allows exact reproducibility which is a foundation principle for 
scientific results.
+The most common CLI (which is also known as a shell) in GNU/Linux is GNU Bash; 
we strongly encourage you to put aside several hours and go through this 
beautifully explained web page: @url{https://flossmanuals.net/command-line/}.
+You don't need to read or even fully understand the whole thing; a general 
knowledge of the first few chapters is enough to get you going.
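+
+As a minimal sketch of this kind of automation (the center and width values 
here are only illustrative):
+
+@example
+#!/bin/sh
+# Run the same Crop command on every FITS image in this directory.
+for f in *.fits; do
+  astcrop "$f" --center=189.16,62.21 --width=0.1 --output="crop-$f"
+done
+@end example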
+
+Since the operations in the GUI are limited and they are visible, reading a 
manual is not that important in the GUI (most programs don't even have any!).
+However, to give you the creative power explained above with a CLI program, it 
is best if you first read the manual of any program you are using.
+You don't need to memorize any details; only an understanding of the 
generalities is needed.
+Once you start working, there are easier ways to remember a particular option 
or operation detail, see @ref{Getting help}.
 
 @cindex GNU Emacs
 @cindex Virtual console
-To experience the command-line in its full glory and not in the GUI
-terminal emulator, press the following keys together:
-@key{CTRL+ALT+F4}@footnote{Instead of @key{F4}, you can use any of the keys
-from @key{F1} to @key{F6} for different virtual consoles depending on your
-GNU/Linux distribution, try them all out. You can also run a separate GUI
-from within this console if you want to.} to access the virtual console. To
-return back to your GUI, press the same keys above replacing @key{F4} with
-@key{F7} (or @key{F1}, or @key{F2}, depending on your GNU/Linux
-distribution). In the virtual console, the GUI, with all its distracting
-colors and information, is gone. Enabling you to focus entirely on your
-actual work.
+To experience the command-line in its full glory and not in the GUI terminal 
emulator, press the following keys together: @key{CTRL+ALT+F4}@footnote{Instead 
of @key{F4}, you can use any of the keys from @key{F1} to @key{F6} for 
different virtual consoles depending on your GNU/Linux distribution; try them 
all out.
+You can also run a separate GUI from within this console if you want to.} to 
access the virtual console.
+To return to your GUI, press the same keys above replacing @key{F4} with 
@key{F7} (or @key{F1}, or @key{F2}, depending on your GNU/Linux distribution).
+In the virtual console, the GUI, with all its distracting colors and 
information, is gone, enabling you to focus entirely on your actual work.
 
 @cindex Resource heavy operations
-For operations that use a lot of your system's resources (processing a
-large number of large astronomical images for example), the virtual
-console is the place to run them. This is because the GUI is not
-competing with your research work for your system's RAM and CPU. Since
-the virtual consoles are completely independent, you can even log out
-of your GUI environment to give even more of your hardware resources
-to the programs you are running and thus reduce the operating time.
+For operations that use a lot of your system's resources (processing a large 
number of large astronomical images for example), the virtual console is the 
place to run them.
+This is because the GUI is not competing with your research work for your 
system's RAM and CPU.
+Since the virtual consoles are completely independent, you can even log out of 
your GUI environment to give even more of your hardware resources to the 
programs you are running and thus reduce the operating time.
 
 @cindex Secure shell
 @cindex SSH
 @cindex Remote operation
-Since it uses far less system resources, the CLI is also convenient for
-remote access to your computer. Using secure shell (SSH) you can log in
-securely to your system (similar to the virtual console) from anywhere even
-if the connection speeds are low. There are apps for smart phones and
-tablets which allow you to do this.
+Since it uses far fewer system resources, the CLI is also convenient for 
remote access to your computer.
+Using secure shell (SSH) you can log in securely to your system (similar to 
the virtual console) from anywhere, even if the connection speed is low.
+There are apps for smartphones and tablets that allow you to do this.
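+
+For example, assuming a hypothetical user name and server address (both only 
illustrative), such a remote login from the command-line looks like this:
+
+@example
+$ ssh yourname@@example-server.edu
+@end example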
 
 
 
@@ -1484,79 +1195,48 @@ tablets which allow you to do this.
 @cindex Halted program
 @cindex Program crashing
 @cindex Inconsistent results
-According to Wikipedia ``a software bug is an error, flaw, failure, or
-fault in a computer program or system that causes it to produce an
-incorrect or unexpected result, or to behave in unintended ways''. So when
-you see that a program is crashing, not reading your input correctly,
-giving the wrong results, or not writing your output correctly, you have
-found a bug. In such cases, it is best if you report the bug to the
-developers. The programs will also inform you if known impossible
-situations occur (which are caused by something unexpected) and will ask
-the users to report the bug issue.
+According to Wikipedia, ``a software bug is an error, flaw, failure, or 
fault in a computer program or system that causes it to produce an incorrect 
or unexpected result, or to behave in unintended ways''.
+So when you see that a program is crashing, not reading your input correctly, 
giving the wrong results, or not writing your output correctly, you have found 
a bug.
+In such cases, it is best if you report the bug to the developers.
+The programs will also inform you if known impossible situations occur 
(which are caused by something unexpected) and will ask you to report the 
bug.
 
 @cindex Bug reporting
-Prior to actually filing a bug report, it is best to search previous
-reports. The issue might have already been found and even solved. The best
-place to check if your bug has already been discussed is the bugs tracker
-on @ref{Gnuastro project webpage} at
-@url{https://savannah.gnu.org/bugs/?group=gnuastro}. In the top search
-fields (under ``Display Criteria'') set the ``Open/Closed'' drop-down menu
-to ``Any'' and choose the respective program or general category of the bug
-in ``Category'' and click the ``Apply'' button. The results colored green
-have already been solved and the status of those colored in red is shown in
-the table.
+Prior to actually filing a bug report, it is best to search previous reports.
+The issue might have already been found and even solved.
+The best place to check if your bug has already been discussed is the bugs 
tracker on @ref{Gnuastro project webpage} at 
@url{https://savannah.gnu.org/bugs/?group=gnuastro}.
+In the top search fields (under ``Display Criteria''), set the 
``Open/Closed'' drop-down menu to ``Any'', choose the respective program or 
general category of the bug in ``Category'', and click the ``Apply'' button.
+The results colored green have already been solved and the status of those 
colored in red is shown in the table.
 
 @cindex Version control
-Recently corrected bugs are probably not yet publicly released because
-they are scheduled for the next Gnuastro stable release. If the bug is
-solved but not yet released and it is an urgent issue for you, you can
-get the version controlled source and compile that, see @ref{Version
-controlled source}.
-
-To solve the issue as readily as possible, please follow the following to
-guidelines in your bug report. The
-@url{http://www.chiark.greenend.org.uk/~sgtatham/bugs.html, How to Report
-Bugs Effectively} and @url{http://catb.org/~esr/faqs/smart-questions.html,
-How To Ask Questions The Smart Way} essays also provide some good generic
-advice for all software (don't contact their authors for Gnuastro's
-problems). Mastering the art of giving good bug reports (like asking good
-questions) can greatly enhance your experience with any free and open
-source software. So investing the time to read through these essays will
-greatly reduce your frustration after you see something doesn't work the
-way you feel it is supposed to for a large range of software, not just
-Gnuastro.
+Recently corrected bugs are probably not yet publicly released because they 
are scheduled for the next Gnuastro stable release.
+If the bug is solved but not yet released and it is an urgent issue for you, 
you can get the version controlled source and compile that, see @ref{Version 
controlled source}.
+
+To solve the issue as readily as possible, please follow these guidelines 
in your bug report.
+The @url{http://www.chiark.greenend.org.uk/~sgtatham/bugs.html, How to Report 
Bugs Effectively} and @url{http://catb.org/~esr/faqs/smart-questions.html, How 
To Ask Questions The Smart Way} essays also provide some good generic advice 
for all software (don't contact their authors for Gnuastro's problems).
+Mastering the art of giving good bug reports (like asking good questions) can 
greatly enhance your experience with any free and open source software.
+So investing the time to read through these essays will greatly reduce your 
frustration with a large range of software (not just Gnuastro) when something 
doesn't work the way you feel it is supposed to.
 
 @table @strong
 
 @item Be descriptive
-Please provide as many details as possible and be very descriptive. Explain
-what you expected and what the output was: it might be that your
-expectation was wrong. Also please clearly state which sections of the
-Gnuastro book (this book), or other references you have studied to
-understand the problem. This can be useful in correcting the book (adding
-links to likely places where users will check). But more importantly, it
-will be encouraging for the developers, since you are showing how serious
-you are about the problem and that you have actually put some thought into
-it. ``To be able to ask a question clearly is two-thirds of the way to
-getting it answered.'' -- John Ruskin (1819-1900).
+Please provide as many details as possible and be very descriptive.
+Explain what you expected and what the output was: it might be that your 
expectation was wrong.
+Also please clearly state which sections of the Gnuastro book (this book) 
or other references you have studied to understand the problem.
+This can be useful in correcting the book (adding links to likely places where 
users will check).
+But more importantly, it will be encouraging for the developers, since you are 
showing how serious you are about the problem and that you have actually put 
some thought into it.
+``To be able to ask a question clearly is two-thirds of the way to getting it 
answered.'' -- John Ruskin (1819-1900).
 
 @item Individual and independent bug reports
-If you have found multiple bugs, please send them as separate (and
-independent) bugs (as much as possible). This will significantly help
-us in managing and resolving them sooner.
+If you have found multiple bugs, please send them as separate (and 
independent) bugs (as much as possible).
+This will significantly help us in managing and resolving them sooner.
 
 @cindex Reproducible bug reports
 @item Reproducible bug reports
-If we cannot exactly reproduce your bug, then it is very hard to resolve
-it. So please send us a Minimal working
-example@footnote{@url{http://en.wikipedia.org/wiki/Minimal_Working_Example}}
-along with the description. For example in running a program, please send
-us the full command-line text and the output with the @option{-P} option,
-see @ref{Operating mode options}. If it is caused only for a certain input,
-also send us that input file. In case the input FITS is large, please use
-Crop to only crop the problematic section and make it as small as possible
-so it can easily be uploaded and downloaded and not waste the archive's
-storage, see @ref{Crop}.
+If we cannot exactly reproduce your bug, then it is very hard to resolve it.
+So please send us a minimal working 
example@footnote{@url{http://en.wikipedia.org/wiki/Minimal_Working_Example}} 
along with the description.
+For example, when running a program, please send us the full command-line 
text and the output of the @option{-P} option, see @ref{Operating mode 
options}.
+If the bug occurs only for a certain input, also send us that input file.
+In case the input FITS file is large, please use Crop to cut out only the 
problematic section and make it as small as possible, so it can easily be 
uploaded and downloaded without wasting the archive's storage, see @ref{Crop} 
(a hypothetical sketch of such a report is shown after this list).
 @end table
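+
+@noindent
+For example, a hypothetical report about a crash might include the failing 
command, its printed configuration, and a small crop of the input (the file 
name and section range below are only illustrative assumptions):
+
+@example
+## The failing command, and the configuration it was run with.
+$ astnoisechisel input.fits
+$ astnoisechisel input.fits -P
+
+## A small crop around the problematic region, to attach to the report.
+$ astcrop input.fits --section=100:300,100:300
+@end example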
 
 @noindent
@@ -1567,33 +1247,28 @@ There are generally two ways to inform us of bugs:
 @cindex Mailing list: bug-gnuastro
 @cindex @code{bug-gnuastro@@gnu.org}
 @item
-Send a mail to @code{bug-gnuastro@@gnu.org}. Any mail you send to this
-address will be distributed through the bug-gnuastro mailing
-list@footnote{@url{https://lists.gnu.org/mailman/listinfo/bug-gnuastro}}. This
-is the simplest way to send us bug reports. The developers will then
-register the bug into the project webpage (next choice) for you.
+Send a mail to @code{bug-gnuastro@@gnu.org}.
+Any mail you send to this address will be distributed through the bug-gnuastro 
mailing 
list@footnote{@url{https://lists.gnu.org/mailman/listinfo/bug-gnuastro}}.
+This is the simplest way to send us bug reports.
+The developers will then register the bug on the project webpage (next 
choice) for you.
 
 @cindex Gnuastro project page
 @cindex Support request manager
 @cindex Submit new tracker item
 @cindex Anonymous bug submission
 @item
-Use the Gnuastro project webpage at
-@url{https://savannah.gnu.org/projects/gnuastro/}: There are two ways
-to get to the submission page as listed below. Fill in the form as
-described below and submit it (see @ref{Gnuastro project webpage} for
-more on the project webpage).
+Use the Gnuastro project webpage at 
@url{https://savannah.gnu.org/projects/gnuastro/}: There are two ways to get to 
the submission page as listed below.
+Fill in the form as described below and submit it (see @ref{Gnuastro project 
webpage} for more on the project webpage).
 
 @itemize
 
 @item
-Using the top horizontal menu items, immediately under the top page
-title. Hovering your mouse on ``Support'' will open a drop-down
-list. Select ``Submit new''.
+Using the top horizontal menu items, immediately under the top page title.
+Hovering your mouse on ``Support'' will open a drop-down list.
+Select ``Submit new''.
 
 @item
-In the main body of the page, under the ``Communication tools''
-section, click on ``Submit new item''.
+In the main body of the page, under the ``Communication tools'' section, click 
on ``Submit new item''.
 
 @end itemize
 @end itemize
@@ -1602,15 +1277,10 @@ section, click on ``Submit new item''.
 @cindex Bug tracker
 @cindex Task tracker
 @cindex Viewing trackers
-Once the items have been registered in the mailing list or webpage,
-the developers will add it to either the ``Bug Tracker'' or ``Task
-Manager'' trackers of the Gnuastro project webpage. These two trackers
-can only be edited by the Gnuastro project developers, but they can be
-browsed by anyone, so you can follow the progress on your bug. You are
-most welcome to join us in developing Gnuastro and fixing the bug you
-have found maybe a good starting point. Gnuastro is designed to be
-easy for anyone to develop (see @ref{Science and its tools}) and there
-is a full chapter devoted to developing it: @ref{Developing}.
+Once the items have been registered in the mailing list or webpage, the 
developers will add it to either the ``Bug Tracker'' or ``Task Manager'' 
trackers of the Gnuastro project webpage.
+These two trackers can only be edited by the Gnuastro project developers, but 
they can be browsed by anyone, so you can follow the progress on your bug.
+You are most welcome to join us in developing Gnuastro; fixing the bug you 
have found may be a good starting point.
+Gnuastro is designed to be easy for anyone to develop (see @ref{Science and 
its tools}) and there is a full chapter devoted to developing it: 
@ref{Developing}.
 
 
 @node Suggest new feature, Announcements, Report a bug, Introduction
@@ -1618,52 +1288,30 @@ is a full chapter devoted to developing it: 
@ref{Developing}.
 
 @cindex Feature requests
 @cindex Additions to Gnuastro
-We would always be happy to hear of suggested new features. For every
-program there are already lists of features that we are planning to
-add. You can see the current list of plans from the Gnuastro project
-webpage at @url{https://savannah.gnu.org/projects/gnuastro/} and following
-@clicksequence{``Tasks''@click{}``Browse''} on the horizontal menu at the
-top of the page immediately under the title, see @ref{Gnuastro project
-webpage}. If you want to request a feature to an existing program, click on
-the ``Display Criteria'' above the list and under ``Category'', choose that
-particular program. Under ``Category'' you can also see the existing
-suggestions for new programs or other cases like installation,
-documentation or libraries. Also be sure to set the ``Open/Closed'' value
-to ``Any''.
-
-If the feature you want to suggest is not already listed in the task
-manager, then follow the steps that are fully described in @ref{Report a
-bug}. Please have in mind that the developers are all busy with their own
-astronomical research, and implementing existing ``task''s to add or
-resolving bugs. Gnuastro is a volunteer effort and none of the developers
-are paid for their hard work. So, although we will try our best, please
-don't not expect that your suggested feature be immediately included (with
-the next release of Gnuastro).
-
-The best person to apply the exciting new feature you have in mind is
-you, since you have the motivation and need. In fact Gnuastro is
-designed for making it as easy as possible for you to hack into it
-(add new features, change existing ones and so on), see @ref{Science
-and its tools}. Please have a look at the chapter devoted to
-developing (@ref{Developing}) and start applying your desired
-feature. Once you have added it, you can use it for your own work and
-if you feel you want others to benefit from your work, you can request
-for it to become part of Gnuastro. You can then join the developers
-and start maintaining your own part of Gnuastro. If you choose to take
-this path of action please contact us before hand (@ref{Report a bug})
-so we can avoid possible duplicate activities and get interested
-people in contact.
+We would always be happy to hear of suggested new features.
+For every program there are already lists of features that we are planning to 
add.
+You can see the current list of plans from the Gnuastro project webpage at 
@url{https://savannah.gnu.org/projects/gnuastro/} and following 
@clicksequence{``Tasks''@click{}``Browse''} on the horizontal menu at the top 
of the page immediately under the title, see @ref{Gnuastro project webpage}.
+If you want to request a feature for an existing program, click on the 
``Display Criteria'' above the list and under ``Category'', choose that 
particular program.
+Under ``Category'' you can also see the existing suggestions for new programs 
or other cases like installation, documentation or libraries.
+Also be sure to set the ``Open/Closed'' value to ``Any''.
+
+If the feature you want to suggest is not already listed in the task manager, 
then follow the steps that are fully described in @ref{Report a bug}.
+Please bear in mind that the developers are all busy with their own 
astronomical research, implementing the existing ``task''s, and resolving 
bugs.
+Gnuastro is a volunteer effort and none of the developers are paid for their 
hard work.
+So, although we will try our best, please don't expect your suggested 
feature to be immediately included (in the next release of Gnuastro).
+
+The best person to apply the exciting new feature you have in mind is you, 
since you have the motivation and need.
+In fact, Gnuastro is designed to make it as easy as possible for you to 
hack into it (add new features, change existing ones and so on), see 
@ref{Science and its tools}.
+Please have a look at the chapter devoted to developing (@ref{Developing}) and 
start applying your desired feature.
+Once you have added it, you can use it for your own work, and if you feel 
others would benefit from your work, you can request that it become part of 
Gnuastro.
+You can then join the developers and start maintaining your own part of 
Gnuastro.
+If you choose to take this path of action, please contact us beforehand 
(@ref{Report a bug}) so we can avoid possible duplicate activities and put 
interested people in contact.
 
 @cartouche
 @noindent
-@strong{Gnuastro is a collection of low level programs:} As described in
-@ref{Program design philosophy}, a founding principle of Gnuastro is that
-each library or program should be basic and low-level. High level jobs
-should be done by running the separate programs or using separate functions
-in succession through a shell script or calling the libraries by higher
-level functions, see the examples in @ref{Tutorials}. So when making the
-suggestions please consider how your desired job can best be broken into
-separate steps and modularized.
+@strong{Gnuastro is a collection of low level programs:} As described in 
@ref{Program design philosophy}, a founding principle of Gnuastro is that each 
library or program should be basic and low-level.
+High level jobs should be done by running the separate programs or using 
separate functions in succession through a shell script or calling the 
libraries by higher level functions, see the examples in @ref{Tutorials}.
+So when making suggestions, please consider how your desired job can best 
be broken into separate steps and modularized.
 @end cartouche
 
 
@@ -1673,19 +1321,15 @@ separate steps and modularized.
 
 @cindex Announcements
 @cindex Mailing list: info-gnuastro
-Gnuastro has a dedicated mailing list for making announcements
-(@code{info-gnuastro}). Anyone can subscribe to this mailing list. Anytime
-there is a new stable or test release, an email will be circulated
-there. The email contains a summary of the overall changes along with a
-detailed list (from the @file{NEWS} file). This mailing list is thus the
-best way to stay up to date with new releases, easily learn about the
-updated/new features, or dependencies (see @ref{Dependencies}).
+Gnuastro has a dedicated mailing list for making announcements 
(@code{info-gnuastro}).
+Anyone can subscribe to this mailing list.
+Anytime there is a new stable or test release, an email will be circulated 
there.
+The email contains a summary of the overall changes along with a detailed list 
(from the @file{NEWS} file).
+This mailing list is thus the best way to stay up to date with new releases, 
easily learn about the updated/new features, or dependencies (see 
@ref{Dependencies}).
 
-To subscribe to this list, please visit
-@url{https://lists.gnu.org/mailman/listinfo/info-gnuastro}. Traffic (number
-of mails per unit time) in this list is designed to be low: only a handful
-of mails per year. Previous announcements are available on
-@url{http://lists.gnu.org/archive/html/info-gnuastro/, its archive}.
+To subscribe to this list, please visit 
@url{https://lists.gnu.org/mailman/listinfo/info-gnuastro}.
+Traffic (number of mails per unit time) in this list is designed to be low: 
only a handful of mails per year.
+Previous announcements are available on 
@url{http://lists.gnu.org/archive/html/info-gnuastro/, its archive}.
 
 
 
@@ -1697,29 +1341,19 @@ In this book we have the following conventions:
 @itemize
 
 @item
-All commands that are to be run on the shell (command-line) prompt as the
-user start with a @command{$}. In case they must be run as a super-user or
-system administrator, they will start with a single @command{#}. If the
-command is in a separate line and next line @code{is also in the code type
-face}, but doesn't have any of the @command{$} or @command{#} signs, then
-it is the output of the command after it is run. As a user, you don't need
-to type those lines. A line that starts with @command{##} is just a comment
-for explaining the command to a human reader and must not be typed.
+All commands that are to be run on the shell (command-line) prompt as the user 
start with a @command{$}.
+In case they must be run as a super-user or system administrator, they will 
start with a single @command{#}.
+If the command is on a separate line and the next line @code{is also in the 
code type face}, but doesn't have any of the @command{$} or @command{#} 
signs, then it is the output of the command after it is run.
+As a user, you don't need to type those lines.
+A line that starts with @command{##} is just a comment for explaining the 
command to a human reader and must not be typed.
 
 @item
-If the command becomes larger than the page width a @key{\} is
-inserted in the code. If you are typing the code by hand on the
-command-line, you don't need to use multiple lines or add the extra
-space characters, so you can omit them. If you want to copy and paste
-these examples (highly discouraged!) then the @key{\} should stay.
-
-The @key{\} character is a shell escape character which is used
-commonly to make characters which have special meaning for the shell
-loose that special place (the shell will not treat them specially if
-there is a @key{\} behind them). When it is a last character in a line
-(the next character is a new-line character) the new-line character
-looses its meaning an the shell sees it as a simple white-space
-character, enabling you to use multiple lines to write your commands.
+If the command becomes larger than the page width, a @key{\} is inserted in 
the code.
+If you are typing the code by hand on the command-line, you don't need to 
use multiple lines or add the extra space characters, so you can omit them.
+If you want to copy and paste these examples (highly discouraged!) then the 
@key{\} should stay.
+
+The @key{\} character is a shell escape character, commonly used to make 
characters which have special meaning for the shell lose that special meaning 
(the shell will not treat them specially if there is a @key{\} behind them).
+When it is the last character in a line (the next character is a new-line 
character), the new-line character loses its meaning and the shell sees it as 
a simple white-space character, enabling you to use multiple lines to write 
your commands (see the hypothetical snippet after this list).
 
 @end itemize
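+
+@noindent
+As a hypothetical demonstration of these conventions together (the file 
names and printed output below are only illustrative assumptions), consider 
the following snippet:
+
+@example
+## Count the FITS files in this directory (a comment, don't type it).
+$ ls *.fits | wc -l
+3
+
+## A long command, continued on a second line with a trailing \.
+$ astcrop large-image.fits --section=100:300,100:300 \
+          --output=cropped.fits
+@end example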
 
@@ -1728,76 +1362,98 @@ character, enabling you to use multiple lines to write 
your commands.
 @node Acknowledgments,  , Conventions, Introduction
 @section Acknowledgments
 
-Gnuastro would not have been possible without scholarships and grants from
-several funding institutions. We thus ask that if you used Gnuastro in any
-of your papers/reports, please add the proper citation and acknowledge the
-funding agencies/projects. For details of which papers to cite (may be
-different for different programs) and get the acknowledgment statement to
-include in your paper, please run the relevant programs with the common
-@option{--cite} option like the example commands below (for more on
-@option{--cite}, please see @ref{Operating mode options}).
+Gnuastro would not have been possible without scholarships and grants from 
several funding institutions.
+We thus ask that if you used Gnuastro in any of your papers/reports, please 
add the proper citation and acknowledge the funding agencies/projects.
+For details of which papers to cite (may be different for different programs) 
and get the acknowledgment statement to include in your paper, please run the 
relevant programs with the common @option{--cite} option like the example 
commands below (for more on @option{--cite}, please see @ref{Operating mode 
options}).
 
 @example
 $ astnoisechisel --cite
 $ astmkcatalog --cite
 @end example
 
-Here, we'll acknowledge all the institutions (and their grants) along with
-the people who helped make Gnuastro possible. The full list of Gnuastro
-authors is available at the start of this book and the @file{AUTHORS} file
-in the source code (both are generated automatically from the version
-controlled history). The plain text file @file{THANKS}, which is also
-distributed along with the source code, contains the list of people and
-institutions who played an indirect role in Gnuastro (not committed any
-code in the Gnuastro version controlled history).
-
-The Japanese Ministry of Education, Culture, Sports, Science, and
-Technology (MEXT) scholarship for Mohammad Akhlaghi's Masters and PhD
-degree in Tohoku University Astronomical Institute had an instrumental role
-in the long term learning and planning that made the idea of Gnuastro
-possible. The very critical view points of Professor Takashi Ichikawa
-(Mohammad's adviser) were also instrumental in the initial ideas and
-creation of Gnuastro. Afterwards, the European Research Council (ERC)
-advanced grant 339659-MUSICOS (Principal investigator: Roland Bacon) was
-vital in the growth and expansion of Gnuastro. Working with Roland at the
-Centre de Recherche Astrophysique de Lyon (CRAL), enabled a thorough
-re-write of the core functionality of all libraries and programs, turning
-Gnuastro into the large collection of generic programs and libraries it is
-today.  Work on improving Gnuastro and making it mature is now continuing
-primarily in the Instituto de Astrofisica de Canarias (IAC) and in
-particular in collaboration with Johan Knapen and Ignacio Trujillo.
+Here, we'll acknowledge all the institutions (and their grants) along with the 
people who helped make Gnuastro possible.
+The full list of Gnuastro authors is available at the start of this book and 
the @file{AUTHORS} file in the source code (both are generated automatically 
from the version controlled history).
+The plain text file @file{THANKS}, which is also distributed along with the 
source code, contains the list of people and institutions who played an 
indirect role in Gnuastro (not committed any code in the Gnuastro version 
controlled history).
+
+The Japanese Ministry of Education, Culture, Sports, Science, and Technology 
(MEXT) scholarship for Mohammad Akhlaghi's Masters and PhD degree in Tohoku 
University Astronomical Institute had an instrumental role in the long term 
learning and planning that made the idea of Gnuastro possible.
+The very critical viewpoints of Professor Takashi Ichikawa (Mohammad's 
adviser) were also instrumental in the initial ideas and creation of Gnuastro.
+Afterwards, the European Research Council (ERC) advanced grant 339659-MUSICOS 
(Principal investigator: Roland Bacon) was vital in the growth and expansion of 
Gnuastro.
+Working with Roland at the Centre de Recherche Astrophysique de Lyon (CRAL) 
enabled a thorough re-write of the core functionality of all libraries and 
programs, turning Gnuastro into the large collection of generic programs and 
libraries it is today.
+Work on improving Gnuastro and making it mature is now continuing primarily in 
the Instituto de Astrofisica de Canarias (IAC) and in particular in 
collaboration with Johan Knapen and Ignacio Trujillo.
 
 @c To the developers: please keep this in the same order as the THANKS file
 @c (alphabetical, except for the names in the paragraph above).
-In general, we would like to gratefully thank the following people for
-their useful and constructive comments and suggestions (in alphabetical
-order by family name): Valentina Abril-melgarejo, Marjan Akbari, Hamed
-Altafi, Roland Bacon, Roberto Baena Gall\'e, Zahra Bagheri, Karl Berry,
-Leindert Boogaard, Nicolas Bouch@'e, Fernando Buitrago, Adrian Bunk, Rosa
-Calvi, Nushkia Chamba, Benjamin Clement, Nima Dehdilani, Antonio Diaz Diaz,
-Pierre-Alain Duc, Elham Eftekhari, Gaspar Galaz, Th@'er@`ese Godefroy,
-Madusha Gunawardhana, Bruno Haible, Stephen Hamer, Takashi Ichikawa, Ra@'ul
-Infante Sainz, Brandon Invergo, Oryna Ivashtenko, Aur@'elien Jarno, Lee
-Kelvin, Brandon Kelly, Mohammad-Reza Khellat, Johan Knapen, Geoffry
-Krouchi, Floriane Leclercq, Alan Lefor, Guillaume Mahler, Juan Molina
-Tobar, Francesco Montanari, Dmitrii Oparin, Bertrand Pain, William Pence,
-Mamta Pommier, Bob Proulx, Teymoor Saifollahi, Elham Saremi, Yahya
-Sefidbakht, Alejandro Serrano Borlaff, Jenny Sorce, Lee Spitler, Richard
-Stallman, Michael Stein, Ole Streicher, Alfred M. Szmidt, Michel Tallon,
-Juan C. Tello, @'Eric Thi@'ebaut, Ignacio Trujillo, David Valls-Gabaud,
-Aaron Watkins, Christopher Willmer, Sara Yousefi Taemeh, Johannes Zabl. The
-GNU French Translation Team is also managing the French version of the top
-Gnuastro webpage which we highly appreciate. Finally we should thank all
-the (sometimes anonymous) people in various online forums which patiently
-answered all our small (but important) technical questions.
-
-All work on Gnuastro has been voluntary, but the authors are most grateful
-to the following institutions (in chronological order) for hosting us in
-our research. Where necessary, these institutions have disclaimed any
-ownership of the parts of Gnuastro that were developed there, thus insuring
-the freedom of Gnuastro for the future (see @ref{Copyright assignment}). We
-highly appreciate their support for free software, and thus free science,
-and therefore a free society.
+In general, we would like to gratefully thank the following people for their 
useful and constructive comments and suggestions (in alphabetical order by 
family name):
+Valentina Abril-melgarejo,
+Marjan Akbari,
+Hamed Altafi,
+Roland Bacon,
+Roberto Baena Gall@'e,
+Zahra Bagheri,
+Karl Berry,
+Leindert Boogaard,
+Nicolas Bouch@'e,
+Fernando Buitrago,
+Adrian Bunk,
+Rosa Calvi,
+Nushkia Chamba,
+Benjamin Clement,
+Nima Dehdilani,
+Antonio Diaz Diaz,
+Pierre-Alain Duc,
+Elham Eftekhari,
+Gaspar Galaz,
+Th@'er@`ese Godefroy,
+Madusha Gunawardhana,
+Bruno Haible,
+Stephen Hamer,
+Takashi Ichikawa,
+Ra@'ul Infante Sainz,
+Brandon Invergo,
+Oryna Ivashtenko,
+Aur@'elien Jarno,
+Lee Kelvin,
+Brandon Kelly,
+Mohammad-Reza Khellat,
+Johan Knapen,
+Geoffry Krouchi,
+Floriane Leclercq,
+Alan Lefor,
+Guillaume Mahler,
+Juan Molina Tobar,
+Francesco Montanari,
+Dmitrii Oparin,
+Bertrand Pain,
+William Pence,
+Mamta Pommier,
+Bob Proulx,
+Teymoor Saifollahi,
+Elham Saremi,
+Yahya Sefidbakht,
+Alejandro Serrano Borlaff,
+Zahra Sharbaf,
+Jenny Sorce,
+Lee Spitler,
+Richard Stallman,
+Michael Stein,
+Ole Streicher,
+Alfred M. Szmidt,
+Michel Tallon,
+Juan C. Tello,
+@'Eric Thi@'ebaut,
+Ignacio Trujillo,
+David Valls-Gabaud,
+Aaron Watkins,
+Michael H.F. Wilkinson,
+Christopher Willmer,
+Sara Yousefi Taemeh,
+Johannes Zabl.
+The GNU French Translation Team is also managing the French version of the top 
Gnuastro webpage which we highly appreciate.
+Finally, we should thank all the (sometimes anonymous) people in various 
online forums who patiently answered all our small (but important) technical 
questions.
+
+All work on Gnuastro has been voluntary, but the authors are most grateful to 
the following institutions (in chronological order) for hosting us in our 
research.
+Where necessary, these institutions have disclaimed any ownership of the 
parts of Gnuastro that were developed there, thus ensuring the freedom of 
Gnuastro for the future (see @ref{Copyright assignment}).
+We highly appreciate their support for free software, and thus free science, 
and therefore a free society.
 
 @quotation
 Tohoku University Astronomical Institute, Sendai, Japan.@*
@@ -1823,79 +1479,41 @@ Instituto de Astrofisica de Canarias (IAC), Tenerife, 
Spain.@*
 
 @cindex Tutorial
 @cindex Cookbook
-To help new users have a smooth and easy start with Gnuastro, in this
-chapter several thoroughly elaborated tutorials, or cookbooks, are
-provided. These tutorials demonstrate the capabilities of different
-Gnuastro programs and libraries, along with tips and guidelines for the
-best practices of using them in various realistic situations.
-
-We strongly recommend going through these tutorials to get a good feeling
-of how the programs are related (built in a modular design to be used
-together in a pipeline), very similar to the core Unix-based programs that
-they were modeled on. Therefore these tutorials will greatly help in
-optimally using Gnuastro's programs (and generally, the Unix-like
-command-line environment) effectively for your research.
-
-In @ref{Sufi simulates a detection}, we'll start with a
-fictional@footnote{The two historically motivated tutorials (@ref{Sufi
-simulates a detection} is not intended to be a historical reference (the
-historical facts of this fictional tutorial used Wikipedia as a
-reference). This form of presenting a tutorial was influenced by the
-PGF/TikZ and Beamer manuals. They are both packages in in @TeX{} and
-@LaTeX{}, the first is a high-level vector graphic programming environment,
-while with the second you can make presentation slides. On a similar topic,
-there are also some nice words of wisdom for Unix-like systems called
-@url{http://catb.org/esr/writings/unix-koans, Rootless Root}. These also
-have a similar style but they use a mythical figure named Master Foo. If
-you already have some experience in Unix-like systems, you will definitely
-find these Unix Koans entertaining/educative.} tutorial explaining how Abd
-al-rahman Sufi (903 -- 986 A.D., the first recorded description of
-``nebulous'' objects in the heavens is attributed to him) could have used
-some of Gnuastro's programs for a realistic simulation of his observations
-and see if his detection of nebulous objects was trust-able. Because all
-conditions are under control in a simulated/mock environment/dataset, mock
-datasets can be a valuable tool to inspect the limitations of your data
-analysis and processing. But they need to be as realistic as possible, so
-the first tutorial is dedicated to this important step of an analysis.
-
-The next two tutorials (@ref{General program usage tutorial} and
-@ref{Detecting large extended targets}) use real input datasets from some
-of the deep Hubble Space Telescope (HST) images and the Sloan Digital Sky
-Survey (SDSS) respectively. Their aim is to demonstrate some real-world
-problems that many astronomers often face and how they can be be solved
-with Gnuastro's programs.
-
-The ultimate aim of @ref{General program usage tutorial} is to detect
-galaxies in a deep HST image, measure their positions and brightness and
-select those with the strongest colors. In the process, it takes many
-detours to introduce you to the useful capabilities of many of the
-programs. So please be patient in reading it. If you don't have much time
-and can only try one of the tutorials, we recommend this one.
+To help new users have a smooth and easy start with Gnuastro, in this chapter 
several thoroughly elaborated tutorials, or cookbooks, are provided.
+These tutorials demonstrate the capabilities of different Gnuastro programs 
and libraries, along with tips and guidelines for the best practices of using 
them in various realistic situations.
+
+We strongly recommend going through these tutorials to get a good feeling of 
how the programs are related (built in a modular design to be used together in 
a pipeline), very similar to the core Unix-based programs that they were 
modeled on.
+Therefore these tutorials will greatly help in using Gnuastro's programs 
(and generally, the Unix-like command-line environment) effectively for your 
research.
+
+In @ref{Sufi simulates a detection}, we'll start with a 
fictional@footnote{This historically motivated tutorial is not intended to be 
a historical reference (the historical facts of this fictional tutorial used 
Wikipedia as a reference).
+This form of presenting a tutorial was influenced by the PGF/TikZ and Beamer 
manuals.
+They are both packages in @TeX{} and @LaTeX{}; the first is a high-level 
vector graphics programming environment, while with the second you can make 
presentation slides.
+On a similar topic, there are also some nice words of wisdom for Unix-like 
systems called @url{http://catb.org/esr/writings/unix-koans, Rootless Root}.
+These also have a similar style, but they use a mythical figure named Master 
Foo.
+If you already have some experience in Unix-like systems, you will 
definitely find these Unix Koans entertaining/educative.} tutorial explaining 
how Abd al-rahman Sufi (903 -- 986 A.D., the first recorded description of 
``nebulous'' objects in the heavens is attributed to him) could have used 
some of Gnuastro's programs for a realistic simulation of his observations, 
to see if his detection of nebulous objects could be trusted.
+Because all conditions are under control in a simulated/mock 
environment/dataset, mock datasets can be a valuable tool to inspect the 
limitations of your data analysis and processing.
+But they need to be as realistic as possible, so the first tutorial is 
dedicated to this important step of an analysis.
+
+The next two tutorials (@ref{General program usage tutorial} and 
@ref{Detecting large extended targets}) use real input datasets from some of 
the deep Hubble Space Telescope (HST) images and the Sloan Digital Sky Survey 
(SDSS) respectively.
+Their aim is to demonstrate some real-world problems that many astronomers 
often face and how they can be solved with Gnuastro's programs.
+
+The ultimate aim of @ref{General program usage tutorial} is to detect galaxies 
in a deep HST image, measure their positions and brightness and select those 
with the strongest colors.
+In the process, it takes many detours to introduce you to the useful 
capabilities of many of the programs.
+So please be patient in reading it.
+If you don't have much time and can only try one of the tutorials, we 
recommend this one.
 
 @cindex PSF
 @cindex Point spread function
-@ref{Detecting large extended targets} deals with a major problem in
-astronomy: effectively detecting the faint outer wings of bright (and
-large) nearby galaxies to extremely low surface brightness levels (roughly
-one quarter of the local noise level in the example discussed). Besides the
-interesting scientific questions in these low-surface brightness features,
-failure to properly detect them will bias the measurements of the
-background objects and the survey's noise estimates. This is an important
-issue, especially in wide surveys. Because bright/large galaxies and
-stars@footnote{Stars also have similarly large and extended wings due to
-the point spread function, see @ref{PSF}.}, cover a significant fraction of
-the survey area.
-
-In these tutorials, we have intentionally avoided too many cross references
-to make it more easy to read. For more information about a particular
-program, you can visit the section with the same name as the program in
-this book. Each program section in the subsequent chapters starts by
-explaining the general concepts behind what it does, for example see
-@ref{Convolve}. If you only want practical information on running a
-program, for example its options/configuration, input(s) and output(s),
-please consult the subsection titled ``Invoking ProgramName'', for example
-see @ref{Invoking astnoisechisel}. For an explanation of the conventions we
-use in the example codes through the book, please see @ref{Conventions}.
+@ref{Detecting large extended targets} deals with a major problem in 
astronomy: effectively detecting the faint outer wings of bright (and large) 
nearby galaxies to extremely low surface brightness levels (roughly one quarter 
of the local noise level in the example discussed).
+Besides the interesting scientific questions in these low-surface brightness 
features, failure to properly detect them will bias the measurements of the 
background objects and the survey's noise estimates.
+This is an important issue, especially in wide surveys, because 
bright/large galaxies and stars@footnote{Stars also have similarly large and 
extended wings due to the point spread function, see @ref{PSF}.} cover a 
significant fraction of the survey area.
+
+In these tutorials, we have intentionally avoided too many cross references 
to make them easier to read.
+For more information about a particular program, you can visit the section 
with the same name as the program in this book.
+Each program section in the subsequent chapters starts by explaining the 
general concepts behind what it does, for example see @ref{Convolve}.
+If you only want practical information on running a program, for example its 
options/configuration, input(s) and output(s), please consult the subsection 
titled ``Invoking ProgramName'', for example see @ref{Invoking astnoisechisel}.
+For an explanation of the conventions we use in the example codes 
throughout the book, please see @ref{Conventions}.
 
 @menu
 * Sufi simulates a detection::  Simulating a detection.
@@ -1910,82 +1528,56 @@ use in the example codes through the book, please see 
@ref{Conventions}.
 @cindex Azophi
 @cindex Abd al-rahman Sufi
 @cindex Sufi, Abd al-rahman
-It is the year 953 A.D. and Abd al-rahman Sufi (903 -- 986
-A.D.)@footnote{In Latin Sufi is known as Azophi. He was an Iranian
-astronomer. His manuscript ``Book of fixed stars'' contains the first
-recorded observations of the Andromeda galaxy, the Large Magellanic Cloud
-and seven other non-stellar or `nebulous' objects.}  is in Shiraz as a
-guest astronomer. He had come there to use the advanced 123 centimeter
-astrolabe for his studies on the Ecliptic. However, something was bothering
-him for a long time. While mapping the constellations, there were several
-non-stellar objects that he had detected in the sky, one of them was in the
-Andromeda constellation. During a trip he had to Yemen, Sufi had seen
-another such object in the southern skies looking over the Indian ocean. He
-wasn't sure if such cloud-like non-stellar objects (which he was the first
-to call `Sah@={a}bi' in Arabic or `nebulous') were real astronomical
-objects or if they were only the result of some bias in his
-observations. Could such diffuse objects actually be detected at all with
-his detection technique?
+It is the year 953 A.D. and Abd al-rahman Sufi (903 -- 986 A.D.)@footnote{In 
Latin Sufi is known as Azophi.
+He was an Iranian astronomer.
+His manuscript ``Book of fixed stars'' contains the first recorded 
observations of the Andromeda galaxy, the Large Magellanic Cloud and seven 
other non-stellar or `nebulous' objects.}  is in Shiraz as a guest astronomer.
+He had come there to use the advanced 123 centimeter astrolabe for his studies 
on the Ecliptic.
+However, something was bothering him for a long time.
+While mapping the constellations, there were several non-stellar objects 
that he had detected in the sky; one of them was in the Andromeda 
constellation.
+During a trip to Yemen, Sufi had seen another such object in the southern 
skies looking over the Indian ocean.
+He wasn't sure if such cloud-like non-stellar objects (which he was the first 
to call `Sah@={a}bi' in Arabic or `nebulous') were real astronomical objects or 
if they were only the result of some bias in his observations.
+Could such diffuse objects actually be detected at all with his detection 
technique?
 
 @cindex Almagest
 @cindex Claudius Ptolemy
 @cindex Ptolemy, Claudius
-He still had a few hours left until nightfall (when he would continue
-his studies on the ecliptic) so he decided to find an answer to this
-question. He had thoroughly studied Claudius Ptolemy's (90 -- 168 A.D)
-Almagest and had made lots of corrections to it, in particular in
-measuring the brightness. Using his same experience, he was able to
-measure a magnitude for the objects and wanted to simulate his
-observation to see if a simulated object with the same brightness and
-size could be detected in a simulated noise with the same detection
-technique.  The general outline of the steps he wants to take are:
+He still had a few hours left until nightfall (when he would continue his 
studies on the ecliptic) so he decided to find an answer to this question.
+He had thoroughly studied Claudius Ptolemy's (90 -- 168 A.D) Almagest and had 
made lots of corrections to it, in particular in measuring the brightness.
+Using this same experience, he was able to measure a magnitude for the 
objects and wanted to simulate his observation to see if a simulated object 
with the same brightness and size could be detected in simulated noise with 
the same detection technique.
+The general outline of the steps he wants to take are:
 
 @enumerate
 
 @item
-Make some mock profiles in an over-sampled image. The initial mock
-image has to be over-sampled prior to convolution or other forms of
-transformation in the image. Through his experiences, Sufi knew that
-this is because the image of heavenly bodies is actually transformed
-by the atmosphere or other sources outside the atmosphere (for example
-gravitational lenses) prior to being sampled on an image. Since that
-transformation occurs on a continuous grid, to best approximate it, he
-should do all the work on a finer pixel grid. In the end he can
-re-sample the result to the initially desired grid size.
+Make some mock profiles in an over-sampled image.
+The initial mock image has to be over-sampled prior to convolution or other 
forms of transformation in the image.
+Through his experiences, Sufi knew that this is because the image of heavenly 
bodies is actually transformed by the atmosphere or other sources outside the 
atmosphere (for example gravitational lenses) prior to being sampled on an 
image.
+Since that transformation occurs on a continuous grid, to best approximate it, 
he should do all the work on a finer pixel grid.
+In the end he can re-sample the result to the initially desired grid size.
 
 @item
 @cindex PSF
-Convolve the image with a point spread function (PSF, see @ref{PSF}) that
-is over-sampled to the same resolution as the mock image. Since he wants to
-finish in a reasonable time and the PSF kernel will be very large due to
-oversampling, he has to use frequency domain convolution which has the side
-effect of dimming the edges of the image. So in the first step above he
-also has to build the image to be larger by at least half the width of the
-PSF convolution kernel on each edge.
+Convolve the image with a point spread function (PSF, see @ref{PSF}) that is 
over-sampled to the same resolution as the mock image.
+Since he wants to finish in a reasonable time and the PSF kernel will be very 
large due to oversampling, he has to use frequency domain convolution which has 
the side effect of dimming the edges of the image.
+So in the first step above he also has to build the image to be larger by at 
least half the width of the PSF convolution kernel on each edge.
 
 @item
-With all the transformations complete, the image should be re-sampled
-to the same size of the pixels in his detector.
+With all the transformations complete, the image should be re-sampled to 
the same pixel size as his detector.
 
 @item
+He should crop those extra pixels on all edges to remove the frequency 
domain convolution artifacts in the final product.
 
 @item
-He should add noise to the (until now, noise-less) mock image. After
-all, all observations have noise associated with them.
+He should add noise to the (until now, noise-less) mock image.
+After all, all observations have noise associated with them.
 
 @end enumerate
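+
+@noindent
+Looking ahead, each of these steps maps onto one of the Gnuastro programs 
used below.
+A rough, hypothetical sketch of the whole pipeline (the exact options of 
each command are introduced step by step in the rest of this tutorial, and 
the final noise step is discussed at the end) would be:
+
+@example
+## Step 1: over-sampled mock profiles and the PSF.
+$ astmkprof cat.txt --prepforconv
+
+## Step 2: convolve the mock image with the PSF.
+$ astconvolve --kernel=0_cat.fits cat.fits
+
+## Step 3: re-sample to the originally desired pixel scale.
+$ astwarp --scale=1/5 --centeroncorner cat_convolved.fits
+
+## Step 4: crop away the extra borders.
+$ astcrop cat_convolved_scaled.fits --section=13:*-12,13:*-12
+@end example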
 
-Fortunately Sufi had heard of GNU Astronomy Utilities from a colleague in
-Isfahan (where he worked) and had installed it on his computer a year
-before. It had tools to do all the steps above. He had used MakeProfiles
-before, but wasn't sure which columns he had chosen in his user or system
-wide configuration files for which parameters, see @ref{Configuration
-files}. So to start his simulation, Sufi runs MakeProfiles with the
-@option{-P} option to make sure what columns in a catalog MakeProfiles
-currently recognizes and the output image parameters. In particular, Sufi
-is interested in the recognized columns (shown below).
+Fortunately Sufi had heard of GNU Astronomy Utilities from a colleague in 
Isfahan (where he worked) and had installed it on his computer a year before.
+It had tools to do all the steps above.
+He had used MakeProfiles before, but wasn't sure which columns he had 
chosen in his user or system-wide configuration files for which parameters, 
see @ref{Configuration files}.
+So to start his simulation, Sufi runs MakeProfiles with the @option{-P} 
option to check which columns in a catalog MakeProfiles currently recognizes, 
along with the output image parameters.
+In particular, Sufi is interested in the recognized columns (shown below).
 
 @example
 $ astmkprof -P
@@ -2016,46 +1608,23 @@ $ astmkprof -P
 @end example
 
 @noindent
-In Gnuastro, column counting starts from 1, so the columns are ordered such
-that the first column (number 1) can be an ID he specifies for each object
-(and MakeProfiles ignores), each subsequent column is used for another
-property of the profile. It is also possible to use column names for the
-values of these options and change these defaults, but Sufi preferred to
-stick to the defaults. Fortunately MakeProfiles has the capability to also
-make the PSF which is to be used on the mock image and using the
-@option{--prepforconv} option, he can also make the mock image to be larger
-by the correct amount and all the sources to be shifted by the correct
-amount.
-
-For his initial check he decides to simulate the nebula in the Andromeda
-constellation. The night he was observing, the PSF had roughly a FWHM of
-about 5 pixels, so as the first row (profile), he defines the PSF
-parameters and sets the radius column (@code{rcol} above, fifth column) to
-@code{5.000}, he also chooses a Moffat function for its functional
-form. Remembering how diffuse the nebula in the Andromeda constellation
-was, he decides to simulate it with a mock S@'{e}rsic index 1.0 profile. He
-wants the output to be 499 pixels by 499 pixels, so he can put the center
-of the mock profile in the centeral pixel of the image (note that an even
-number doesn't have a central element).
-
-Looking at his drawings of it, he decides a reasonable effective radius for
-it would be 40 pixels on this image pixel scale, he sets the axis ratio and
-position angle to approximately correct values too and finally he sets the
-total magnitude of the profile to 3.44 which he had accurately
-measured. Sufi also decides to truncate both the mock profile and PSF at 5
-times the respective radius parameters. In the end he decides to put four
-stars on the four corners of the image at very low magnitudes as a visual
-scale.
-
-Using all the information above, he creates the catalog of mock profiles he
-wants in a file named @file{cat.txt} (short for catalog) using his favorite
-text editor and stores it in a directory named @file{simulationtest} in his
-home directory. [The @command{cat} command prints the contents of a file,
-short for ``concatenation''. So please copy-paste the lines after
-``@command{cat cat.txt}'' into @file{cat.txt} when the editor opens in the
-steps above it, note that there are 7 lines, first one starting with
-@key{#}. Also be careful when copying from the PDF format, the Info, web,
-or text formats shouldn't have any problem]:
+In Gnuastro, column counting starts from 1, so the columns are ordered such 
that the first column (number 1) can be an ID he specifies for each object 
(and MakeProfiles ignores); each subsequent column is used for another 
property of the profile.
+It is also possible to use column names for the values of these options and 
change these defaults, but Sufi preferred to stick to the defaults.
+Fortunately, MakeProfiles can also make the PSF which is to be used on the 
mock image; using the @option{--prepforconv} option, he can also make the 
mock image larger by the correct amount and have all the sources shifted by 
the correct amount.
+
+For his initial check he decides to simulate the nebula in the Andromeda 
constellation.
+The night he was observing, the PSF had a FWHM of about 5 pixels, so as the 
first row (profile), he defines the PSF parameters and sets the radius column 
(@code{rcol} above, fifth column) to @code{5.000}; he also chooses a Moffat 
function for its functional form.
+Remembering how diffuse the nebula in the Andromeda constellation was, he 
decides to simulate it with a mock S@'{e}rsic index 1.0 profile.
+He wants the output to be 499 pixels by 499 pixels, so he can put the center 
of the mock profile in the central pixel of the image (note that an even number 
doesn't have a central element).
+
+Looking at his drawings of it, he decides a reasonable effective radius for 
it would be 40 pixels on this image pixel scale; he sets the axis ratio and 
position angle to approximately correct values too, and finally he sets the 
total magnitude of the profile to 3.44, which he had accurately measured.
+Sufi also decides to truncate both the mock profile and PSF at 5 times the 
respective radius parameters.
+In the end he decides to put four stars on the four corners of the image at 
very low magnitudes as a visual scale.
+
+Using all the information above, he creates the catalog of mock profiles he 
wants in a file named @file{cat.txt} (short for catalog) using his favorite 
text editor and stores it in a directory named @file{simulationtest} in his 
home directory.
+[The @command{cat} command prints the contents of a file, short for 
``concatenation''.
+So please copy-paste the lines after ``@command{cat cat.txt}'' into 
@file{cat.txt} when the editor opens in the steps above it; note that there 
are 7 lines, the first one starting with @key{#}.
+Also be careful when copying from the PDF format; the Info, web, or text 
formats shouldn't have any problem]:
 
 @example
 $ mkdir ~/simulationtest
@@ -2076,8 +1645,8 @@ $ cat cat.txt
 @end example
 
 @noindent
-The zero-point magnitude for his observation was 18. Now he has all the
-necessary parameters and runs MakeProfiles with the following command:
+The zero-point magnitude for his observation was 18.
+Now he has all the necessary parameters and runs MakeProfiles with the 
following command:
 
 @example
 
@@ -2102,28 +1671,21 @@ $ls
 
 @cindex Oversample
 @noindent
-The file @file{0_cat.fits} is the PSF Sufi had asked for and
-@file{cat.fits} is the image containing the other 5 objects. The PSF is now
-available to him as a separate file for the convolution step. While he was
-preparing the catalog, one of his students approached him and was also
-following the steps. When he opened the image, the student was surprised to
-see that all the stars are only one pixel and not in the shape of the PSF
-as we see when we image the sky at night. So Sufi explained to him that the
-stars will take the shape of the PSF after convolution and this is how they
-would look if we didn't have an atmosphere or an aperture when we took the
-image. The size of the image was also surprising for the student, instead
-of 499 by 499, it was 2615 by 2615 pixels (from the command below):
+The file @file{0_cat.fits} is the PSF Sufi had asked for and @file{cat.fits} 
is the image containing the other 5 objects.
+The PSF is now available to him as a separate file for the convolution step.
+While he was preparing the catalog, one of his students approached him and was 
also following the steps.
+When he opened the image, the student was surprised to see that all the 
stars are only one pixel wide, not in the shape of the PSF as we see when we 
image the sky at night.
+So Sufi explained to him that the stars will take the shape of the PSF after 
convolution and this is how they would look if we didn't have an atmosphere or 
an aperture when we took the image.
+The size of the image was also surprising for the student, instead of 499 by 
499, it was 2615 by 2615 pixels (from the command below):
 
 @example
 $ astfits cat.fits -h1 | grep NAXIS
 @end example
 
 @noindent
-So Sufi explained why oversampling is important for parts of the image
-where the flux change is significant over a pixel. Sufi then explained to
-him that after convolving we will re-sample the image to get our originally
-desired size/resolution. To convolve the image, Sufi ran the following
-command:
+So Sufi explained why oversampling is important for parts of the image where the flux changes significantly over a pixel.
+Sufi then explained to him that after convolving we will re-sample the image 
to get our originally desired size/resolution.
+To convolve the image, Sufi ran the following command:
 
 @example
 $ astconvolve --kernel=0_cat.fits cat.fits
@@ -2144,14 +1706,9 @@ $ls
 @end example
 
 @noindent
-When convolution finished, Sufi opened the @file{cat_convolved.fits} file
-and showed the effect of convolution to his student and explained to him
-how a PSF with a larger FWHM would make the points even wider. With the
-convolved image ready, they were prepared to re-sample it to the original
-pixel scale Sufi had planned [from the @command{$ astmkprof -P} command
-above, recall that MakeProfiles had over-sampled the image by 5
-times]. Sufi explained the basic concepts of warping the image to his
-student and ran Warp with the following command:
+When convolution finished, Sufi opened the @file{cat_convolved.fits} file, showed the effect of convolution to his student, and explained to him how a PSF with a larger FWHM would make the points even wider.
+With the convolved image ready, they were prepared to re-sample it to the 
original pixel scale Sufi had planned [from the @command{$ astmkprof -P} 
command above, recall that MakeProfiles had over-sampled the image by 5 times].
+Sufi explained the basic concepts of warping the image to his student and ran 
Warp with the following command:
 
 @example
 $ astwarp --scale=1/5 --centeroncorner cat_convolved.fits
@@ -2174,10 +1731,9 @@ NAXIS2  =                  523 / length of data axis 2
 @end example
 
 @noindent
-@file{cat_convolved_scaled.fits} now has the correct pixel scale. However,
-the image is still larger than what we had wanted, it is 523
-(@mymath{499+12+12}) by 523 pixels. The student is slightly confused, so
-Sufi also re-samples the PSF with the same scale by running
+@file{cat_convolved_scaled.fits} now has the correct pixel scale.
+However, the image is still larger than what we had wanted: it is 523 (@mymath{499+12+12}) by 523 pixels.
+The student is slightly confused, so Sufi also re-samples the PSF with the 
same scale by running
 
 @example
 $ astwarp --scale=1/5 --centeroncorner 0_cat.fits
@@ -2188,13 +1744,9 @@ NAXIS2  =                   25 / length of data axis 2
 @end example
 
 @noindent
-Sufi notes that @mymath{25=(2\times12)+1} and goes on to explain how
-frequency space convolution will dim the edges and that is why he added the
-@option{--prepforconv} option to MakeProfiles, see @ref{If convolving
-afterwards}. Now that convolution is done, Sufi can remove those extra
-pixels using Crop with the command below. Crop's @option{--section} option
-accepts coordinates inclusively and counting from 1 (according to the FITS
-standard), so the crop region's first pixel has to be 13, not 12.
+Sufi notes that @mymath{25=(2\times12)+1} and goes on to explain how frequency 
space convolution will dim the edges and that is why he added the 
@option{--prepforconv} option to MakeProfiles, see @ref{If convolving 
afterwards}.
+Now that convolution is done, Sufi can remove those extra pixels using Crop 
with the command below.
+Crop's @option{--section} option accepts coordinates inclusively and counting 
from 1 (according to the FITS standard), so the crop region's first pixel has 
to be 13, not 12.
 
 @example
 $ astcrop cat_convolved_scaled.fits --section=13:*-12,13:*-12    \
@@ -2210,19 +1762,13 @@ cat_convolved.fits  cat_convolved_scaled.fits          
cat.txt
 @end example
 
 @noindent
-Finally, @file{cat_convolved_scaled_cropped.fits} is @mymath{499\times499}
-pixels and the mock Andromeda galaxy is centered on the central pixel (open
-the image in a FITS viewer and confirm this by zooming into the center,
-note that an even-width image wouldn't have a central pixel). This is the
-same dimensions as Sufi had desired in the beginning. All this trouble was
-certainly worth it because now there is no dimming on the edges of the
-image and the profile centers are more accurately sampled.
+Finally, @file{cat_convolved_scaled_cropped.fits} is @mymath{499\times499} pixels and the mock Andromeda galaxy is centered on the central pixel (open the image in a FITS viewer and confirm this by zooming into the center; note that an even-width image wouldn't have a central pixel).
+These are the same dimensions that Sufi had desired in the beginning.
+All this trouble was certainly worth it because now there is no dimming on the 
edges of the image and the profile centers are more accurately sampled.
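+
+To confirm the final dimensions without opening a viewer, you can also check the NAXIS keywords just as before (a small sketch; the output HDU is assumed to be 1, as in the earlier check):
+
+@example
+$ astfits cat_convolved_scaled_cropped.fits -h1 | grep NAXIS
+@end example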
 
-The final step to simulate a real observation would be to add noise to the
-image. Sufi set the zeropoint magnitude to the same value that he set when
-making the mock profiles and looking again at his observation log, he had
-measured the background flux near the nebula had a magnitude of 7 that
-night. So using these values he ran MakeNoise:
+The final step to simulate a real observation would be to add noise to the 
image.
+Sufi set the zeropoint magnitude to the same value that he had used when making the mock profiles.
+Looking again at his observation log, he saw that the background flux near the nebula had a magnitude of 7 that night.
+So using these values he ran MakeNoise:
 
 @example
 $ astmknoise --zeropoint=18 --background=7 --output=out.fits    \
@@ -2238,26 +1784,17 @@ cat_convolved.fits cat_convolved_scaled.fits         
cat.txt
 @end example
 
 @noindent
-The @file{out.fits} file now contains the noised image of the mock catalog
-Sufi had asked for. Seeing how the @option{--output} option allows the user
-to specify the name of the output file, the student was confused and wanted
-to know why Sufi hadn't used it before? Sufi then explained to him that for
-intermediate steps it is best to rely on the automatic output, see
-@ref{Automatic output}. Doing so will give all the intermediate files the
-same basic name structure, so in the end you can simply remove them all
-with the Shell's capabilities. So Sufi decided to show this to the student
-by making a shell script from the commands he had used before.
+The @file{out.fits} file now contains the noised image of the mock catalog 
Sufi had asked for.
+Seeing how the @option{--output} option allows the user to specify the name of the output file, the student was confused and wanted to know why Sufi hadn't used it before.
+Sufi then explained to him that for intermediate steps it is best to rely on the automatic output, see @ref{Automatic output}.
+Doing so will give all the intermediate files the same basic name structure, 
so in the end you can simply remove them all with the Shell's capabilities.
+So Sufi decided to show this to the student by making a shell script from the 
commands he had used before.
 
-The command-line shell has the capability to read all the separate input
-commands from a file. This is useful when you want to do the same thing
-multiple times, with only the names of the files or minor parameters
-changing between the different instances. Using the shell's history (by
-pressing the up keyboard key) Sufi reviewed all the commands and then he
-retrieved the last 5 commands with the @command{$ history 5} command. He
-selected all those lines he had input and put them in a text file named
-@file{mymock.sh}. Then he defined the @code{edge} and @code{base} shell
-variables for easier customization later. Finally, before every command, he
-added some comments (lines starting with @key{#}) for future readability.
+The command-line shell has the capability to read all the separate input 
commands from a file.
+This is useful when you want to do the same thing multiple times, with only 
the names of the files or minor parameters changing between the different 
instances.
+Using the shell's history (by pressing the up keyboard key) Sufi reviewed all 
the commands and then he retrieved the last 5 commands with the @command{$ 
history 5} command.
+He selected all those lines he had input and put them in a text file named 
@file{mymock.sh}.
+Then he defined the @code{edge} and @code{base} shell variables for easier 
customization later.
+Finally, before every command, he added some comments (lines starting with 
@key{#}) for future readability.
 
 @example
 edge=12
@@ -2296,31 +1833,19 @@ rm 0*.fits "$base"*.fits
 @end example
 
 @cindex Comments
-He used this chance to remind the student of the importance of comments in
-code or shell scripts: when writing the code, you have a good mental
-picture of what you are doing, so writing comments might seem superfluous
-and excessive. However, in one month when you want to re-use the script,
-you have lost that mental picture and remembering it can be time-consuming
-and frustrating. The importance of comments is further amplified when you
-want to share the script with a friend/colleague. So it is good to
-accompany any script/code with useful comments while you are writing it
-(create a good mental picture of what/why you are doing something).
+He used this chance to remind the student of the importance of comments in 
code or shell scripts: when writing the code, you have a good mental picture of 
what you are doing, so writing comments might seem superfluous and excessive.
+However, in one month when you want to re-use the script, you have lost that 
mental picture and remembering it can be time-consuming and frustrating.
+The importance of comments is further amplified when you want to share the 
script with a friend/colleague.
+So it is good to accompany any script/code with useful comments while you are 
writing it (create a good mental picture of what/why you are doing something).
 
 @cindex Gedit
 @cindex GNU Emacs
-Sufi then explained to the eager student that you define a variable by
-giving it a name, followed by an @code{=} sign and the value you
-want. Then you can reference that variable from anywhere in the script
-by calling its name with a @code{$} prefix. So in the script whenever
-you see @code{$base}, the value we defined for it above is used. If
-you use advanced editors like GNU Emacs or even simpler ones like
-Gedit (part of the GNOME graphical user interface) the variables will
-become a different color which can really help in understanding the
-script. We have put all the @code{$base} variables in double quotation
-marks (@code{"}) so the variable name and the following text do not
-get mixed, the shell is going to ignore the @code{"} after replacing
-the variable value. To make the script executable, Sufi ran the
-following command:
+Sufi then explained to the eager student that you define a variable by giving 
it a name, followed by an @code{=} sign and the value you want.
+Then you can reference that variable from anywhere in the script by calling 
its name with a @code{$} prefix.
+So in the script whenever you see @code{$base}, the value we defined for it 
above is used.
+If you use advanced editors like GNU Emacs, or even simpler ones like Gedit (part of the GNOME graphical user interface), the variables will be shown in a different color, which can really help in understanding the script.
+We have put all the @code{$base} variables in double quotation marks (@code{"}) so the variable name and the following text do not get mixed; the shell is going to ignore the @code{"} after replacing the variable value.
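+
+As a minimal sketch of why the quotes matter (hypothetical values, just for illustration):
+
+@example
+base=cat
+echo "$base"_convolved.fits    # prints cat_convolved.fits
+echo $base_convolved.fits      # prints .fits: $base_convolved is unset
+@end example
+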
+To make the script executable, Sufi ran the following command:
 
 @example
 $ chmod +x mymock.sh
@@ -2333,40 +1858,21 @@ Then finally, Sufi ran the script, simply by calling 
its file name:
 $ ./mymock.sh
 @end example
 
-After the script finished, the only file remaining is the @file{out.fits}
-file that Sufi had wanted in the beginning.  Sufi then explained to the
-student how he could run this script anywhere that he has a catalog if the
-script is in the same directory. The only thing the student had to modify
-in the script was the name of the catalog (the value of the @code{base}
-variable in the start of the script) and the value to the @code{edge}
-variable if he changed the PSF size. The student was also happy to hear
-that he won't need to make it executable again when he makes changes later,
-it will remain executable unless he explicitly changes the executable flag
-with @command{chmod}.
-
-The student was really excited, since now, through simple shell
-scripting, he could really speed up his work and run any command in
-any fashion he likes allowing him to be much more creative in his
-works. Until now he was using the graphical user interface which
-doesn't have such a facility and doing repetitive things on it was
-really frustrating and some times he would make mistakes. So he left
-to go and try scripting on his own computer.
-
-Sufi could now get back to his own work and see if the simulated
-nebula which resembled the one in the Andromeda constellation could be
-detected or not. Although it was extremely faint@footnote{The
-brightness of a diffuse object is added over all its pixels to give
-its final magnitude, see @ref{Flux Brightness and magnitude}. So
-although the magnitude 3.44 (of the mock nebula) is orders of
-magnitude brighter than 6 (of the stars), the central galaxy is much
-fainter. Put another way, the brightness is distributed over a large
-area in the case of a nebula.}, fortunately it passed his detection
-tests and he wrote it in the draft manuscript that would later become
-``Book of fixed stars''. He still had to check the other nebula he saw
-from Yemen and several other such objects, but they could wait until
-tomorrow (thanks to the shell script, he only has to define a new
-catalog). It was nearly sunset and they had to begin preparing for the
-night's measurements on the ecliptic.
+After the script finished, the only file remaining was the @file{out.fits} file that Sufi had wanted in the beginning.
+Sufi then explained to the student how he could run this script anywhere he had a catalog, as long as the script was in the same directory.
+The only thing the student had to modify in the script was the name of the catalog (the value of the @code{base} variable at the start of the script) and the value of the @code{edge} variable if he changed the PSF size.
+The student was also happy to hear that he wouldn't need to make the script executable again after changing it later; it will remain executable unless he explicitly changes the executable flag with @command{chmod}.
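+
+If he ever did want to remove the executable flag explicitly, a one-line sketch would be:
+
+@example
+$ chmod -x mymock.sh
+@end example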
+
+The student was really excited, since now, through simple shell scripting, he could really speed up his work and run any command in any fashion he liked, allowing him to be much more creative in his work.
+Until then he had been using the graphical user interface, which doesn't have such a facility, so doing repetitive things on it was really frustrating and sometimes he would make mistakes.
+So he left to go and try scripting on his own computer.
+
+Sufi could now get back to his own work and see whether the simulated nebula, which resembled the one in the Andromeda constellation, could be detected or not.
+Although it was extremely faint@footnote{The brightness of a diffuse object is added over all its pixels to give its final magnitude, see @ref{Flux Brightness and magnitude}.
+So although the magnitude 3.44 (of the mock nebula) is orders of magnitude brighter than 6 (of the stars), the central galaxy is much fainter.
+Put another way, the brightness is distributed over a large area in the case of a nebula.}, fortunately it passed his detection tests and he wrote it in the draft manuscript that would later become the ``Book of Fixed Stars''.
+He still had to check the other nebula he saw from Yemen and several other such objects, but they could wait until tomorrow (thanks to the shell script, he only had to define a new catalog).
+It was nearly sunset and they had to begin preparing for the night's 
measurements on the ecliptic.
 
 
 @menu
@@ -2378,51 +1884,29 @@ night's measurements on the ecliptic.
 
 @cindex Hubble Space Telescope (HST)
 @cindex Colors, broad-band photometry
-Measuring colors of astronomical objects in broad-band or narrow-band
-images is one of the most basic and common steps in astronomical
-analysis. Here, we will use Gnuastro's programs to get a physical scale
-(area at certain redshifts) of the field we are studying, detect objects in
-a Hubble Space Telescope (HST) image, measure their colors and identify the
-ones with the strongest colors, do a visual inspection of these objects and
-inspect spatial position in the image. After this tutorial, you can also
-try the @ref{Detecting large extended targets} tutorial which goes into a
-little more detail on detecting very low surface brightness signal.
-
-During the tutorial, we will take many detours to explain, and practically
-demonstrate, the many capabilities of Gnuastro's programs. In the end you
-will see that the things you learned during this tutorial are much more
-generic than this particular problem and can be used in solving a wide
-variety of problems involving the analysis of data (images or tables). So
-please don't rush, and go through the steps patiently to optimally master
-Gnuastro.
+Measuring colors of astronomical objects in broad-band or narrow-band images 
is one of the most basic and common steps in astronomical analysis.
+Here, we will use Gnuastro's programs to get a physical scale (area at certain redshifts) of the field we are studying, detect objects in a Hubble Space Telescope (HST) image, measure their colors, identify the ones with the strongest colors, do a visual inspection of these objects, and inspect their spatial position in the image.
+After this tutorial, you can also try the @ref{Detecting large extended 
targets} tutorial which goes into a little more detail on detecting very low 
surface brightness signal.
+
+During the tutorial, we will take many detours to explain, and practically 
demonstrate, the many capabilities of Gnuastro's programs.
+In the end you will see that the things you learned during this tutorial are 
much more generic than this particular problem and can be used in solving a 
wide variety of problems involving the analysis of data (images or tables).
+So please don't rush, and go through the steps patiently to optimally master 
Gnuastro.
 
 @cindex XDF survey
 @cindex eXtreme Deep Field (XDF) survey
-In this tutorial, we'll use the HST
-@url{https://archive.stsci.edu/prepds/xdf, eXtreme Deep Field}
-dataset. Like almost all astronomical surveys, this dataset is free for
-download and usable by the public. You will need the following tools in
-this tutorial: Gnuastro, SAO DS9 @footnote{See @ref{SAO ds9}, available at
-@url{http://ds9.si.edu/site/Home.html}}, GNU
-Wget@footnote{@url{https://www.gnu.org/software/wget}}, and AWK (most
-common implementation is GNU
-AWK@footnote{@url{https://www.gnu.org/software/gawk}}).
-
-This tutorial was first prepared for the ``Exploring the Ultra-Low Surface
-Brightness Universe'' workshop (November 2017) at the ISSI in Bern,
-Switzerland. It was further extended in the ``4th Indo-French Astronomy
-School'' (July 2018) organized by LIO, CRAL CNRS UMR5574, UCBL, and IUCAA
-in Lyon, France. We are very grateful to the organizers of these workshops
-and the attendees for the very fruitful discussions and suggestions that
-made this tutorial possible.
+In this tutorial, we'll use the HST @url{https://archive.stsci.edu/prepds/xdf, eXtreme Deep Field} dataset.
+Like almost all astronomical surveys, this dataset is free for download and 
usable by the public.
+You will need the following tools in this tutorial: Gnuastro, SAO DS9@footnote{See @ref{SAO ds9}, available at @url{http://ds9.si.edu/site/Home.html}}, GNU Wget@footnote{@url{https://www.gnu.org/software/wget}}, and AWK (most common implementation is GNU AWK@footnote{@url{https://www.gnu.org/software/gawk}}).
+
+This tutorial was first prepared for the ``Exploring the Ultra-Low Surface 
Brightness Universe'' workshop (November 2017) at the ISSI in Bern, Switzerland.
+It was further extended in the ``4th Indo-French Astronomy School'' (July 
2018) organized by LIO, CRAL CNRS UMR5574, UCBL, and IUCAA in Lyon, France.
+We are very grateful to the organizers of these workshops and the attendees 
for the very fruitful discussions and suggestions that made this tutorial 
possible.
 
 @cartouche
 @noindent
-@strong{Write the example commands manually:} Try to type the example
-commands on your terminal manually and use the history feature of your
-command-line (by pressing the ``up'' button to retrieve previous
-commands). Don't simply copy and paste the commands shown here. This will
-help simulate future situations when you are processing your own datasets.
+@strong{Write the example commands manually:} Try to type the example commands 
on your terminal manually and use the history feature of your command-line (by 
pressing the ``up'' button to retrieve previous commands).
+Don't simply copy and paste the commands shown here.
+This will help simulate future situations when you are processing your own 
datasets.
 @end cartouche
 
 
@@ -2441,54 +1925,40 @@ help simulate future situations when you are processing 
your own datasets.
 * NoiseChisel optimization for storage::  Dramatically decrease output's 
volume.
 * Segmentation and making a catalog::  Finding true peaks and creating a 
catalog.
 * Working with catalogs estimating colors::  Estimating colors using the 
catalogs.
-* Aperture photomery::          Doing photometry on a fixed aperture.
+* Aperture photometry::         Doing photometry on a fixed aperture.
 * Finding reddest clumps and visual inspection::  Selecting some targets and 
inspecting them.
 * Citing and acknowledging Gnuastro::  How to cite and acknowledge Gnuastro in 
your papers.
 @end menu
 
 @node Calling Gnuastro's programs, Accessing documentation, General program 
usage tutorial, General program usage tutorial
 @subsection Calling Gnuastro's programs
-A handy feature of Gnuastro is that all program names start with
-@code{ast}. This will allow your command-line processor to easily list and
-auto-complete Gnuastro's programs for you.  Try typing the following
-command (press @key{TAB} key when you see @code{<TAB>}) to see the list:
+A handy feature of Gnuastro is that all program names start with @code{ast}.
+This will allow your command-line processor to easily list and auto-complete 
Gnuastro's programs for you.
+Try typing the following command (press the @key{TAB} key when you see @code{<TAB>}) to see the list:
 
 @example
 $ ast<TAB><TAB>
 @end example
 
 @noindent
-Any program that starts with @code{ast} (including all Gnuastro programs)
-will be shown. By choosing the subsequent characters of your desired
-program and pressing @key{<TAB><TAB>} again, the list will narrow down and
-the program name will auto-complete once your input characters are
-unambiguous. In short, you often don't need to type the full name of the
-program you want to run.
+Any program that starts with @code{ast} (including all Gnuastro programs) will 
be shown.
+By choosing the subsequent characters of your desired program and pressing 
@key{<TAB><TAB>} again, the list will narrow down and the program name will 
auto-complete once your input characters are unambiguous.
+In short, you often don't need to type the full name of the program you want 
to run.
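+
+For example, typing the three extra characters below and pressing @key{TAB} should be enough to complete the name of the Crop program (assuming no non-Gnuastro program on your system also starts with these characters):
+
+@example
+$ astcr<TAB>        # auto-completes to 'astcrop'
+@end example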
 
 @node Accessing documentation, Setup and data download, Calling Gnuastro's 
programs, General program usage tutorial
 @subsection Accessing documentation
 
-Gnuastro contains a large number of programs and it is natural to forget
-the details of each program's options or inputs and outputs. Therefore,
-before starting the analysis steps of this tutorial, let's review how you
-can access this book to refresh your memory any time you want, without
-having to take your hands off the keyboard.
+Gnuastro contains a large number of programs and it is natural to forget the 
details of each program's options or inputs and outputs.
+Therefore, before starting the analysis steps of this tutorial, let's review 
how you can access this book to refresh your memory any time you want, without 
having to take your hands off the keyboard.
 
-When you install Gnuastro, this book is also installed on your system along
-with all the programs and libraries, so you don't need an internet
-connection to to access/read it. Also, by accessing this book as described
-below, you can be sure that it corresponds to your installed version of
-Gnuastro.
+When you install Gnuastro, this book is also installed on your system along with all the programs and libraries, so you don't need an internet connection to access/read it.
+Also, by accessing this book as described below, you can be sure that it 
corresponds to your installed version of Gnuastro.
 
 @cindex GNU Info
-GNU Info@footnote{GNU Info is already available on almost all Unix-like
-operating systems.} is the program in charge of displaying the manual on
-the command-line (for more, see @ref{Info}). To see this whole book on your
-command-line, please run the following command and press subsequent
-keys. Info has its own mini-environment, therefore we'll show the keys that
-must be pressed in the mini-environment after a @code{->} sign. You can
-also ignore anything after the @code{#} sign in the middle of the line,
-they are only for your information.
+GNU Info@footnote{GNU Info is already available on almost all Unix-like 
operating systems.} is the program in charge of displaying the manual on the 
command-line (for more, see @ref{Info}).
+To see this whole book on your command-line, please run the following command 
and press subsequent keys.
+Info has its own mini-environment; therefore, we'll show the keys that must be pressed in the mini-environment after a @code{->} sign.
+You can also ignore anything after the @code{#} sign in the middle of the line; it is only there for your information.
 
 @example
 $ info gnuastro                # Open the top of the manual.
@@ -2498,62 +1968,47 @@ $ info gnuastro                # Open the top of the 
manual.
 -> q                           # Quit Info, return to the command-line.
 @end example
 
-The thing that greatly simplifies navigation in Info is the links (regions
-with an underline). You can immediately go to the next link in the page
-with the @key{<TAB>} key and press @key{<ENTER>} on it to go into that part
-of the manual. Try the commands above again, but this time also use
-@key{<TAB>} to go to the links and press @key{<ENTER>} on them to go to the
-respective section of the book. Then follow a few more links and go deeper
-into the book. To return to the previous page, press @key{l} (small L). If
-you are searching for a specific phrase in the whole book (for example an
-option name), press @key{s} and type your search phrase and end it with an
-@key{<ENTER>}.
+The thing that greatly simplifies navigation in Info is the links (regions 
with an underline).
+You can immediately go to the next link in the page with the @key{<TAB>} key 
and press @key{<ENTER>} on it to go into that part of the manual.
+Try the commands above again, but this time also use @key{<TAB>} to go to the 
links and press @key{<ENTER>} on them to go to the respective section of the 
book.
+Then follow a few more links and go deeper into the book.
+To return to the previous page, press @key{l} (small L).
+If you are searching for a specific phrase in the whole book (for example an 
option name), press @key{s} and type your search phrase and end it with an 
@key{<ENTER>}.
 
-You don't need to start from the top of the manual every time. For example,
-to get to @ref{Invoking astnoisechisel}, run the following command. In
-general, all programs have such an ``Invoking ProgramName'' section in this
-book. These sections are specifically for the description of inputs,
-outputs and configuration options of each program. You can access them
-directly for each program by giving its executable name to Info.
+You don't need to start from the top of the manual every time.
+For example, to get to @ref{Invoking astnoisechisel}, run the following 
command.
+In general, all programs have such an ``Invoking ProgramName'' section in this 
book.
+These sections are specifically for the description of inputs, outputs and 
configuration options of each program.
+You can access them directly for each program by giving its executable name to 
Info.
 
 @example
 $ info astnoisechisel
 @end example
 
-The other sections don't have such shortcuts. To directly access them from
-the command-line, you need to tell Info to look into Gnuastro's manual,
-then look for the specific section (an unambiguous title is necessary). For
-example, if you only want to review/remember NoiseChisel's @ref{Detection
-options}), just run the following command. Note how case is irrelevant for
-Info when calling a title in this manner.
+The other sections don't have such shortcuts.
+To directly access them from the command-line, you need to tell Info to look 
into Gnuastro's manual, then look for the specific section (an unambiguous 
title is necessary).
+For example, if you only want to review/remember NoiseChisel's @ref{Detection options}, just run the following command.
+Note how case is irrelevant for Info when calling a title in this manner.
 
 @example
 $ info gnuastro "Detection options"
 @end example
 
-In general, Info is a powerful and convenient way to access this whole book
-with detailed information about the programs you are running. If you are
-not already familiar with it, please run the following command and just
-read along and do what it says to learn it. Don't stop until you feel
-sufficiently fluent in it. Please invest the half an hour's time necessary
-to start using Info comfortably. It will greatly improve your productivity
-and you will start reaping the rewards of this investment very soon.
+In general, Info is a powerful and convenient way to access this whole book 
with detailed information about the programs you are running.
+If you are not already familiar with it, please run the following command and 
just read along and do what it says to learn it.
+Don't stop until you feel sufficiently fluent in it.
+Please invest the half hour necessary to start using Info comfortably.
+It will greatly improve your productivity and you will start reaping the 
rewards of this investment very soon.
 
 @example
 $ info info
 @end example
 
-As a good scientist you need to feel comfortable to play with the
-features/options and avoid (be critical to) using default values as much as
-possible. On the other hand, our human memory is limited, so it is
-important to be able to easily access any part of this book fast and
-remember the option names, what they do and their acceptable values.
+As a good scientist, you need to feel comfortable playing with the features/options and avoid (be critical of) using default values as much as possible.
+On the other hand, our human memory is limited, so it is important to be able 
to easily access any part of this book fast and remember the option names, what 
they do and their acceptable values.
 
-If you just want the option names and a short description, calling the
-program with the @option{--help} option might also be a good solution like
-the first example below. If you know a few characters of the option name,
-you can feed the output to @command{grep} like the second or third example
-commands.
+If you just want the option names and a short description, calling the program 
with the @option{--help} option might also be a good solution like the first 
example below.
+If you know a few characters of the option name, you can feed the output to 
@command{grep} like the second or third example commands.
 
 @example
 $ astnoisechisel --help
@@ -2564,28 +2019,23 @@ $ astnoisechisel --help | grep check
 @node Setup and data download, Dataset inspection and cropping, Accessing 
documentation, General program usage tutorial
 @subsection Setup and data download
 
-The first step in the analysis of the tutorial is to download the necessary
-input datasets. First, to keep things clean, let's create a
-@file{gnuastro-tutorial} directory and continue all future steps in it:
+The first step in the analysis of the tutorial is to download the necessary 
input datasets.
+First, to keep things clean, let's create a @file{gnuastro-tutorial} directory 
and continue all future steps in it:
 
 @example
 $ mkdir gnuastro-tutorial
 $ cd gnuastro-tutorial
 @end example
 
-We will be using the near infra-red @url{http://www.stsci.edu/hst/wfc3,
-Wide Field Camera} dataset. If you already have them in another directory
-(for example @file{XDFDIR}, with the same FITS file names), you can set the
-@file{download} directory to be a symbolic link to @file{XDFDIR} with a
-command like this:
+We will be using the near infra-red @url{http://www.stsci.edu/hst/wfc3, Wide 
Field Camera} dataset.
+If you already have them in another directory (for example @file{XDFDIR}, with 
the same FITS file names), you can set the @file{download} directory to be a 
symbolic link to @file{XDFDIR} with a command like this:
 
 @example
 $ ln -s XDFDIR download
 @end example
 
 @noindent
-Otherwise, when the following images aren't already present on your system,
-you can make a @file{download} directory and download them there.
+Otherwise, when the following images aren't already present on your system, 
you can make a @file{download} directory and download them there.
 
 @example
 $ mkdir download
@@ -2597,15 +2047,11 @@ $ cd ..
 @end example
 
 @noindent
-In this tutorial, we'll just use these two filters. Later, you may need to
-download more filters. To do that, you can use the shell's @code{for} loop
-to download them all in series (one after the other@footnote{Note that you
-only have one port to the internet, so downloading in parallel will
-actually be slower than downloading in series.}) with one command like the
-one below for the WFC3 filters. Put this command instead of the two
-@code{wget} commands above. Recall that all the extra spaces, back-slashes
-(@code{\}), and new lines can be ignored if you are typing on the lines on
-the terminal.
+In this tutorial, we'll just use these two filters.
+Later, you may need to download more filters.
+To do that, you can use the shell's @code{for} loop to download them all in 
series (one after the other@footnote{Note that you only have one port to the 
internet, so downloading in parallel will actually be slower than downloading 
in series.}) with one command like the one below for the WFC3 filters.
+Put this command instead of the two @code{wget} commands above.
+Recall that all the extra spaces, back-slashes (@code{\}), and new lines can be ignored if you are typing the commands directly on the terminal.
 
 @example
 $ for f in f105w f125w f140w f160w; do                              \
@@ -2617,49 +2063,33 @@ $ for f in f105w f125w f140w f160w; do                  
            \
 @node Dataset inspection and cropping, Angular coverage on the sky, Setup and 
data download, General program usage tutorial
 @subsection Dataset inspection and cropping
 
-First, let's visually inspect the datasets we downloaded in @ref{Setup and
-data download}. Let's take F160W image as an example. Do the steps below
-with the other image(s) too (and later with any dataset that you want to
-work on). It is very important to get a good visual feeling of the dataset
-you intend to use. Also, note how SAO DS9 (used here for visual inspection
-of FITS images) doesn't follow the GNU style of options where ``long'' and
-``short'' options are preceded by @option{--} and @option{-} respectively
-(for example @option{--width} and @option{-w}, see @ref{Options}).
-
-Run the command below to see the F160W image with DS9. Ds9's
-@option{-zscale} scaling is good to visually highlight the low surface
-brightness regions, and as the name suggests, @option{-zoom to fit} will
-fit the whole dataset in the window. If the window is too small, expand it
-with your mouse, then press the ``zoom'' button on the top row of buttons
-above the image. Afterwards, in the bottom row of buttons, press ``zoom
-fit''. You can also zoom in and out by scrolling your mouse or the
-respective operation on your touch-pad when your cursor/pointer is over the
-image.
+First, let's visually inspect the datasets we downloaded in @ref{Setup and 
data download}.
+Let's take the F160W image as an example.
+Do the steps below with the other image(s) too (and later with any dataset 
that you want to work on).
+It is very important to get a good visual feeling of the dataset you intend to 
use.
+Also, note how SAO DS9 (used here for visual inspection of FITS images) 
doesn't follow the GNU style of options where ``long'' and ``short'' options 
are preceded by @option{--} and @option{-} respectively (for example 
@option{--width} and @option{-w}, see @ref{Options}).
+
+Run the command below to see the F160W image with DS9.
+DS9's @option{-zscale} scaling is good to visually highlight the low surface brightness regions, and as the name suggests, @option{-zoom to fit} will fit the whole dataset in the window.
+If the window is too small, expand it with your mouse, then press the ``zoom'' 
button on the top row of buttons above the image.
+Afterwards, in the bottom row of buttons, press ``zoom fit''.
+You can also zoom in and out by scrolling your mouse (or the respective operation on your touch-pad) when your cursor/pointer is over the image.
 
 @example
 $ ds9 download/hlsp_xdf_hst_wfc3ir-60mas_hudf_f160w_v1_sci.fits     \
       -zscale -zoom to fit
 @end example
 
-As you hover your mouse over the image, notice how the ``Value'' and
-positional fields on the top of the ds9 window get updated. The first thing
-you might notice is that when you hover the mouse over the regions with no
-data, they have a value of zero. The next thing might be that the dataset
-actually has two ``depth''s (see @ref{Quantifying measurement
-limits}). Recall that this is a combined/reduced image of many exposures,
-and the parts that have more exposures are deeper. In particular, the
-exposure time of the deep inner region is larger than 4 times of the outer
-(more shallower) parts.
+As you hover your mouse over the image, notice how the ``Value'' and 
positional fields on the top of the ds9 window get updated.
+The first thing you might notice is that when you hover the mouse over the 
regions with no data, they have a value of zero.
+The next thing might be that the dataset actually has two ``depth''s (see 
@ref{Quantifying measurement limits}).
+Recall that this is a combined/reduced image of many exposures, and the parts 
that have more exposures are deeper.
+In particular, the exposure time of the deep inner region is more than 4 times that of the outer (shallower) parts.
 
-To simplify the analysis in this tutorial, we'll only be working on the
-deep field, so let's crop it out of the full dataset. Fortunately the XDF
-survey webpage (above) contains the vertices of the deep flat WFC3-IR
-field. With Gnuastro's Crop program@footnote{To learn more about the crop
-program see @ref{Crop}.}, you can use those vertices to cutout this deep
-region from the larger image. But before that, to keep things organized,
-let's make a directory called @file{flat-ir} and keep the flat
-(single-depth) regions in that directory (with a `@file{xdf-}' suffix for a
-shorter and easier filename).
+To simplify the analysis in this tutorial, we'll only be working on the deep 
field, so let's crop it out of the full dataset.
+Fortunately the XDF survey webpage (above) contains the vertices of the deep 
flat WFC3-IR field.
+With Gnuastro's Crop program@footnote{To learn more about the Crop program see @ref{Crop}.}, you can use those vertices to cut out this deep region from the larger image.
+But before that, to keep things organized, let's make a directory called @file{flat-ir} and keep the flat (single-depth) regions in that directory (with an `@file{xdf-}' prefix for a shorter and easier filename).
 
 @example
 $ mkdir flat-ir
@@ -2673,15 +2103,11 @@ $ astcrop --mode=wcs -h0 
--output=flat-ir/xdf-f160w.fits              \
           download/hlsp_xdf_hst_wfc3ir-60mas_hudf_f160w_v1_sci.fits
 @end example
 
-The only thing varying in the two calls to Gnuastro's Crop program is the
-filter name. Therefore, to simplify the command, and later allow work on
-more filters, we can use the shell's @code{for} loop. Notice how the two
-places where the filter names (@file{f105w} and @file{f160w}) are used
-above have been replaced with @file{$f} (the shell variable that @code{for}
-will update in every loop) below. In such cases, you should generally avoid
-repeating a command manually and use loops like below. To generalize this
-for more filters later, you can simply add the other filter names in the
-first line before the semi-colon (@code{;}).
+The only thing varying in the two calls to Gnuastro's Crop program is the 
filter name.
+Therefore, to simplify the command, and later allow work on more filters, we 
can use the shell's @code{for} loop.
+Notice how the two places where the filter names (@file{f105w} and 
@file{f160w}) are used above have been replaced with @file{$f} (the shell 
variable that @code{for} will update in every loop) below.
+In such cases, you should generally avoid repeating a command manually and use 
loops like below.
+To generalize this for more filters later, you can simply add the other filter 
names in the first line before the semi-colon (@code{;}).
 
 @example
 $ rm flat-ir/*.fits
@@ -2693,22 +2119,14 @@ $ for f in f105w f160w; do                              
              \
   done
 @end example
 
-Please open these images and inspect them with the same @command{ds9}
-command you used above. You will see how it is nicely flat now and doesn't
-have varying depths. Another important result of this crop is that regions
-with no data now have a NaN (Not-a-Number, or a blank value) value, not
-zero. Zero is a number, and is thus meaningful, especially when you later
-want to NoiseChisel@footnote{As you will see below, unlike most other
-detection algorithms, NoiseChisel detects the objects from their faintest
-parts, it doesn't start with their high signal-to-noise ratio peaks. Since
-the Sky is already subtracted in many images and noise fluctuates around
-zero, zero is commonly higher than the initial threshold applied. Therefore
-not ignoring zero-valued pixels in this image, will cause them to part of
-the detections!}. Generally, when you want to ignore some pixels in a
-dataset, and avoid higher-level ambiguities or complications, it is always
-best to give them blank values (not zero, or some other absurdly large or
-small number). Gnuastro has the Arithmetic program for such cases, and
-we'll introduce it during this tutorial.
+Please open these images and inspect them with the same @command{ds9} command 
you used above.
+You will see how it is nicely flat now and doesn't have varying depths.
+Another important result of this crop is that regions with no data now have a 
NaN (Not-a-Number, or a blank value) value, not zero.
+Zero is a number, and is thus meaningful, especially when you later want to run NoiseChisel@footnote{As you will see below, unlike most other detection algorithms, NoiseChisel detects the objects from their faintest parts; it doesn't start with their high signal-to-noise ratio peaks.
+Since the Sky is already subtracted in many images and noise fluctuates around zero, zero is commonly higher than the initial threshold applied.
+Therefore, not ignoring zero-valued pixels in this image will cause them to be part of the detections!}.
+Generally, when you want to ignore some pixels in a dataset, and avoid 
higher-level ambiguities or complications, it is always best to give them blank 
values (not zero, or some other absurdly large or small number).
+Gnuastro has the Arithmetic program for such cases, and we'll introduce it 
during this tutorial.
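+
+As a preview (a sketch on a hypothetical @file{image.fits}; the operators are properly introduced later), setting all zero-valued pixels of an image to NaN with Arithmetic would look like this:
+
+@example
+$ astarithmetic image.fits image.fits 0 eq nan where
+@end example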
 
 @node Angular coverage on the sky, Cosmological coverage, Dataset inspection 
and cropping, General program usage tutorial
 @subsection Angular coverage on the sky
@@ -2716,44 +2134,29 @@ we'll introduce it during this tutorial.
 @cindex @code{CDELT}
 @cindex Coordinate scales
 @cindex Scales, coordinate
-This is the deepest image we currently have of the sky. The first thing
-that comes to mind may be this: ``How large is this field on the
-sky?''. The FITS world coordinate system (WCS) meta data standard contains
-the key to answering this question: the @code{CDELT} keyword@footnote{In
-the FITS standard, the @code{CDELT} keywords (@code{CDELT1} and
-@code{CDELT2} in a 2D image) specify the scales of each coordinate. In the
-case of this image it is in units of degrees-per-pixel. See Section 8 of
-the @url{https://fits.gsfc.nasa.gov/standard40/fits_standard40aa-le.pdf,
-FITS standard} for more. In short, with the @code{CDELT} convention,
-rotation (@code{PC} or @code{CD} keywords) and scales (@code{CDELT}) are
-separated. In the FITS standard the @code{CDELT} keywords are
-optional. When @code{CDELT} keywords aren't present, the @code{PC} matrix
-is assumed to contain @emph{both} the coordinate rotation and scales. Note
-that not all FITS writers use the @code{CDELT} convention. So you might not
-find the @code{CDELT} keywords in the WCS meta data of some FITS
-files. However, all Gnuastro programs (which use the default FITS keyword
-writing format of WCSLIB) write their output WCS with the the @code{CDELT}
-convention, even if the input doesn't have it. If your dataset doesn't use
-the @code{CDELT} convension, you can feed it to any (simple) Gnuastro
-program (for example Arithmetic) and the output will have the @code{CDELT}
-keyword.}.
-
-With the commands below, we'll use @code{CDELT} (along with the image size)
-to find the answer. The lines starting with @code{##} are just comments for
-you to read and understand each command. Don't type them on the
-terminal. The commands are intentionally repetitive in some places to
-better understand each step and also to demonstrate the beauty of
-command-line features like history, variables, pipes and loops (which you
-will commonly use as you master the command-line).
+This is the deepest image we currently have of the sky.
+The first thing that comes to mind may be this: ``How large is this field on 
the sky?''.
+The FITS world coordinate system (WCS) meta data standard contains the key to 
answering this question: the @code{CDELT} keyword@footnote{In the FITS 
standard, the @code{CDELT} keywords (@code{CDELT1} and @code{CDELT2} in a 2D 
image) specify the scales of each coordinate.
+In the case of this image it is in units of degrees-per-pixel.
+See Section 8 of the 
@url{https://fits.gsfc.nasa.gov/standard40/fits_standard40aa-le.pdf, FITS 
standard} for more.
+In short, with the @code{CDELT} convention, rotation (@code{PC} or @code{CD} 
keywords) and scales (@code{CDELT}) are separated.
+In the FITS standard the @code{CDELT} keywords are optional.
+When @code{CDELT} keywords aren't present, the @code{PC} matrix is assumed to 
contain @emph{both} the coordinate rotation and scales.
+Note that not all FITS writers use the @code{CDELT} convention.
+So you might not find the @code{CDELT} keywords in the WCS meta data of some 
FITS files.
+However, all Gnuastro programs (which use the default FITS keyword writing 
format of WCSLIB) write their output WCS with the @code{CDELT} convention, even 
if the input doesn't have it.
+If your dataset doesn't use the @code{CDELT} convention, you can feed it to 
any (simple) Gnuastro program (for example Arithmetic) and the output will have 
the @code{CDELT} keyword.}.
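+
+If you want to check whether a given file uses this convention, you can filter its header keywords just like the NAXIS check earlier (a sketch; adjust the file name and HDU to your case):
+
+@example
+$ astfits flat-ir/xdf-f160w.fits -h1 | grep CDELT
+@end example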
+
+With the commands below, we'll use @code{CDELT} (along with the image size) to 
find the answer.
+The lines starting with @code{##} are just comments for you to read and 
understand each command.
+Don't type them on the terminal.
+The commands are intentionally repetitive in some places to better understand 
each step and also to demonstrate the beauty of command-line features like 
history, variables, pipes and loops (which you will commonly use as you master 
the command-line).
 
 @cartouche
 @noindent
-@strong{Use shell history:} Don't forget to make effective use of your
-shell's history: you don't have to re-type previous command to add
-something to them. This is especially convenient when you just want to make
-a small change to your previous command. Press the ``up'' key on your
-keyboard (possibly multiple times) to see your previous command(s) and
-modify them accordingly.
+@strong{Use shell history:} Don't forget to make effective use of your shell's history: you don't have to re-type previous commands to add something to them.
+This is especially convenient when you just want to make a small change to 
your previous command.
+Press the ``up'' key on your keyboard (possibly multiple times) to see your 
previous command(s) and modify them accordingly.
 @end cartouche
 
 @example
@@ -2799,35 +2202,26 @@ $ echo $n $r
 $ echo $n $r | awk '@{print $1 * ($2^2) * 3600@}'
 @end example
 
-The output of the last command (area of this field) is 4.03817 (or
-approximately 4.04) arc-minutes squared. Just for comparison, this is
-roughly 175 times smaller than the average moon's angular area (with a
-diameter of 30arc-minutes or half a degree).
+The output of the last command (area of this field) is 4.03817 (or 
approximately 4.04) arc-minutes squared.
+Just for comparison, this is roughly 175 times smaller than the moon's average angular area (with a diameter of 30 arc-minutes, or half a degree).
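+
+As a quick sanity check of that factor, you can let AWK do the arithmetic (assuming the moon is a disk of radius 15 arc-minutes; @code{atan2(0,-1)} is just a way of getting @mymath{\pi} in AWK):
+
+@example
+$ echo 4.03817 | awk '@{pi=atan2(0,-1); print pi*15^2/$1@}'
+@end example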
 
 @cindex GNU AWK
 @cartouche
 @noindent
-@strong{AWK for table/value processing:} As you saw above AWK is a powerful
-and simple tool for text processing. You will see it often in shell
-scripts. GNU AWK (the most common implementation) comes with a free and
-wonderful @url{https://www.gnu.org/software/gawk/manual/, book} in the same
-format as this book which will allow you to master it nicely. Just like
-this manual, you can also access GNU AWK's manual on the command-line
-whenever necessary without taking your hands off the keyboard. Just run
-@code{info awk}.
+@strong{AWK for table/value processing:} As you saw above, AWK is a powerful and simple tool for text processing.
+GNU AWK (the most common implementation) comes with a free and wonderful 
@url{https://www.gnu.org/software/gawk/manual/, book} in the same format as 
this book which will allow you to master it nicely.
+Just like this manual, you can also access GNU AWK's manual on the 
command-line whenever necessary without taking your hands off the keyboard.
+Just run @code{info awk}.
 @end cartouche
 
 
 @node Cosmological coverage, Building custom programs with the library, 
Angular coverage on the sky, General program usage tutorial
 @subsection Cosmological coverage
-Having found the angular coverage of the dataset in @ref{Angular coverage
-on the sky}, we can now use Gnuastro to answer a more physically motivated
-question: ``How large is this area at different redshifts?''. To get a
-feeling of the tangential area that this field covers at redshift 2, you
-can use Gnuastro's CosmicCalcular program (@ref{CosmicCalculator}). In
-particular, you need the tangential distance covered by 1 arc-second as raw
-output. Combined with the field's area that was measured before, we can
-calculate the tangential distance in Mega Parsecs squared (@mymath{Mpc^2}).
+Having found the angular coverage of the dataset in @ref{Angular coverage on 
the sky}, we can now use Gnuastro to answer a more physically motivated 
question: ``How large is this area at different redshifts?''.
+To get a feeling of the tangential area that this field covers at redshift 2, you can use Gnuastro's CosmicCalculator program (@ref{CosmicCalculator}).
+In particular, you need the tangential distance covered by 1 arc-second as raw 
output.
+Combined with the field's area that was measured before, we can calculate the 
tangential distance in Mega Parsecs squared (@mymath{Mpc^2}).
 
 @example
 ## Print general cosmological properties at redshift 2 (for example).
@@ -2863,9 +2257,8 @@ $ echo $k $a | awk '@{print $1 * $2 / 1e6@}'
 @end example
 
 @noindent
-At redshift 2, this field therefore covers approximately 1.07
-@mymath{Mpc^2}. If you would like to see how this tangential area changes
-with redshift, you can use a shell loop like below.
+At redshift 2, this field therefore covers approximately 1.07 @mymath{Mpc^2}.
+If you would like to see how this tangential area changes with redshift, you 
can use a shell loop like below.
 
 @example
 $ for z in 0.5 1.0 1.5 2.0 2.5 3.0 3.5 4.0 4.5 5.0; do        \
@@ -2875,11 +2268,9 @@ $ for z in 0.5 1.0 1.5 2.0 2.5 3.0 3.5 4.0 4.5 5.0; do   
     \
 @end example
 
 @noindent
-Fortunately, the shell has a useful tool/program to print a sequence of
-numbers that is nicely called @code{seq}. You can use it instead of typing
-all the different redshifts in this example. For example the loop below
-will calculate and print the tangential coverage of this field across a
-larger range of redshifts (0.1 to 5) and with finer increments of 0.1.
+Fortunately, the shell has a useful tool/program, aptly named @code{seq}, to print a sequence of numbers.
+You can use it instead of typing all the different redshifts in this example.
+For example the loop below will calculate and print the tangential coverage of 
this field across a larger range of redshifts (0.1 to 5) and with finer 
increments of 0.1.
 
 @example
 $ for z in $(seq 0.1 0.1 5); do                                  \
@@ -2891,30 +2282,18 @@ $ for z in $(seq 0.1 0.1 5); do                         
         \
 
 @node Building custom programs with the library, Option management and 
configuration files, Cosmological coverage, General program usage tutorial
 @subsection Building custom programs with the library
-In @ref{Cosmological coverage}, we repeated a certain calculation/output of
-a program multiple times using the shell's @code{for} loop. This simple way
-repeating a calculation is great when it is only necessary once. However,
-if you commonly need this calculation and possibly for a larger number of
-redshifts at higher precision, the command above can be slow (try it out to
-see).
-
-This slowness of the repeated calls to a generic program (like
-CosmicCalculator), is because it can have a lot of overhead on each
-call. To be generic and easy to operate, it has to parse the command-line
-and all configuration files (see @ref{Option management and configuration
-files}) which contain human-readable characters and need a lot of
-pre-processing to be ready for processing by the computer. Afterwards,
-CosmicCalculator has to check the sanity of its inputs and check which of
-its many options you have asked for. All the this pre-processing takes as
-much time as the high-level calculation you are requesting, and it has to
-re-do all of these for every redshift in your loop.
-
-To greatly speed up the processing, you can directly access the core
-work-horse of CosmicCalculator without all that overhead by designing your
-custom program for this job. Using Gnuastro's library, you can write your
-own tiny program particularly designed for this exact calculation (and
-nothing else!). To do that, copy and paste the following C program in a
-file called @file{myprogram.c}.
+In @ref{Cosmological coverage}, we repeated a certain calculation/output of a 
program multiple times using the shell's @code{for} loop.
+This simple way of repeating a calculation is great when it is only necessary once.
+However, if you commonly need this calculation and possibly for a larger 
number of redshifts at higher precision, the command above can be slow (try it 
out to see).
+
+This slowness of the repeated calls to a generic program (like CosmicCalculator) is because it can have a lot of overhead on each call.
+To be generic and easy to operate, it has to parse the command-line and all 
configuration files (see @ref{Option management and configuration files}) which 
contain human-readable characters and need a lot of pre-processing to be ready 
for processing by the computer.
+Afterwards, CosmicCalculator has to check the sanity of its inputs and check 
which of its many options you have asked for.
+All this pre-processing takes as much time as the high-level calculation you are requesting, and it all has to be re-done for every redshift in your loop.
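+
+One simple way to feel this overhead is to time a single call on your own system (a sketch, assuming the same @option{--arcsectandist} output and redshift of 2 as in the previous section):
+
+@example
+$ time astcosmiccal -z2 --arcsectandist
+@end example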
+
+To greatly speed up the processing, you can directly access the core work-horse of CosmicCalculator without all that overhead by designing your own custom program for this job.
+Using Gnuastro's library, you can write your own tiny program designed specifically for this exact calculation (and nothing else!).
+To do that, copy and paste the following C program into a file called @file{myprogram.c}.
 
 @example
 #include <math.h>
@@ -2958,46 +2337,31 @@ $ astbuildprog myprogram.c
 @end example
 
 @noindent
-In the command above, you used Gnuastro's BuildProgram program. Its job is
-to greatly simplify the compilation, linking and running of simple C
-programs that use Gnuastro's library (like this one). BuildProgram is
-designed to manage Gnuastro's dependencies, compile and link your custom
-program and then run it.
+In the command above, you used Gnuastro's BuildProgram program.
+Its job is to greatly simplify the compilation, linking and running of simple 
C programs that use Gnuastro's library (like this one).
+BuildProgram is designed to manage Gnuastro's dependencies, compile and link 
your custom program and then run it.
 
-Did you notice how your custom program was much faster than the repeated
-calls to CosmicCalculator in the previous section? You might have noticed
-that a new file called @file{myprogram} is also created in the
-directory. This is the compiled program that was created and run by the
-command above (its in binary machine code format, not human-readable any
-more). You can run it again to get the same results with a command like
-this:
+Did you notice how much faster your custom program was than the repeated calls to CosmicCalculator in the previous section?
+You might have noticed that a new file called @file{myprogram} is also created in the directory.
+This is the compiled program that was created and run by the command above (it is in binary machine code format, no longer human-readable).
+You can run it again to get the same results with a command like this:
 
 @example
 $ ./myprogram
 @end example
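+
+To quantify the speed difference, you can prepend the shell's @command{time} keyword to each command (a simple check, not part of the main analysis; the exact numbers depend on your system, but note how even a single CosmicCalculator call can take a noticeable fraction of the time your custom program needs for all the redshifts):
+
+@example
+$ time ./myprogram
+$ time astcosmiccal -z2 --arcsectandist
+@end example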
 
-The efficiency of your custom @file{myprogram} compared to repeated calls
-to CosmicCalculator is because in the latter, the requested processing is
-comparable to the necessary overheads. For other programs that take large
-input datasets and do complicated processing on them, the overhead is
-usually negligible compared to the processing. In such cases, the libraries
-are only useful if you want a different/new processing compared to the
-functionalities in Gnuastro's existing programs.
+Your custom @file{myprogram} is more efficient than repeated calls to CosmicCalculator because in the latter, the necessary overheads are comparable to the requested processing.
+For other programs that take large input datasets and do complicated processing on them, the overhead is usually negligible compared to the processing.
+In such cases, the libraries are only useful if you want a different/new processing compared to the functionalities in Gnuastro's existing programs.
 
-Gnuastro has a large library which is used extensively by all the
-programs. In other words, the library is like the skeleton of Gnuastro. For
-the full list of available functions classified by context, please see
-@ref{Gnuastro library}. Gnuastro's library and BuildProgram are created to
-make it easy for you to use these powerful features as you like. This gives
-you a high level of creativity, while also providing efficiency and
-robustness. Several other complete working examples (involving images and
-tables) of Gnuastro's libraries can be see in @ref{Library demo
-programs}.
+Gnuastro has a large library which is used extensively by all the programs.
+In other words, the library is like the skeleton of Gnuastro.
+For the full list of available functions classified by context, please see 
@ref{Gnuastro library}.
+Gnuastro's library and BuildProgram are created to make it easy for you to use 
these powerful features as you like.
+This gives you a high level of creativity, while also providing efficiency and 
robustness.
+Several other complete working examples (involving images and tables) of Gnuastro's libraries can be seen in @ref{Library demo programs}.
 
-But for this tutorial, let's stop discussing the libraries at this point in
-and get back to Gnuastro's already built programs which don't need any
-programming. But before continuing, let's clean up the files we don't need
-any more:
+But for this tutorial, let's stop discussing the libraries at this point and get back to Gnuastro's already built programs, which don't need any programming.
+Before continuing, let's clean up the files we don't need any more:
 
 @example
 $ rm myprogram*
@@ -3006,61 +2370,43 @@ $ rm myprogram*
 
 @node Option management and configuration files, Warping to a new pixel grid, 
Building custom programs with the library, General program usage tutorial
 @subsection Option management and configuration files
-None of Gnuastro's programs keep a default value internally within their
-code. However, when you ran CosmicCalculator only with the @option{-z2}
-option (not specifying the cosmological parameters) in @ref{Cosmological
-coverage}, it completed its processing and printed results. Where did the
-necessary cosmological parameters (like the matter density and etc) that
-are necessary for its calculations come from? Fast reply: the values come
-from a configuration file (see @ref{Configuration file precedence}).
-
-CosmicCalculator is a small program with a limited set of
-parameters/options. Therefore, let's use it to discuss configuration files
-in Gnuastro (for more, you can always see @ref{Configuration
-files}). Configuration files are an important part of all Gnuastro's
-programs, especially the ones with a large number of options, so its
-important to understand this part well .
-
-Once you get comfortable with configuration files here, you can make good
-use of them in all Gnuastro programs (for example, NoiseChisel). For
-example, to do optimal detection on various datasets, you can have
-configuration files for different noise properties. The configuration of
-each program (besides its version) is vital for the reproducibility of your
-results, so it is important to manage them properly.
-
-As we saw above, the full list of the options in all Gnuastro programs can
-be seen with the @option{--help} option. Try calling it with
-CosmicCalculator as shown below. Note how options are grouped by context to
-make it easier to find your desired option. However, in each group, options
-are ordered alphabetically.
+None of Gnuastro's programs keep a default value internally within their code.
+However, when you ran CosmicCalculator only with the @option{-z2} option (not 
specifying the cosmological parameters) in @ref{Cosmological coverage}, it 
completed its processing and printed results.
+Where did the cosmological parameters (like the matter density) that are necessary for its calculations come from?
+Quick answer: the values come from a configuration file (see @ref{Configuration file precedence}).
+
+CosmicCalculator is a small program with a limited set of parameters/options.
+Therefore, let's use it to discuss configuration files in Gnuastro (for more, 
you can always see @ref{Configuration files}).
+Configuration files are an important part of all Gnuastro's programs, especially the ones with a large number of options, so it is important to understand this part well.
+
+Once you get comfortable with configuration files here, you can make good use 
of them in all Gnuastro programs (for example, NoiseChisel).
+For example, to do optimal detection on various datasets, you can have 
configuration files for different noise properties.
+The configuration of each program (besides its version) is vital for the 
reproducibility of your results, so it is important to manage them properly.
+
+As we saw above, the full list of the options in all Gnuastro programs can be 
seen with the @option{--help} option.
+Try calling it with CosmicCalculator as shown below.
+Note how options are grouped by context to make it easier to find your desired 
option.
+Within each group, the options are ordered alphabetically.
 
 @example
 $ astcosmiccal --help
 @end example
 
 @noindent
-The options that need a value have an @key{=} sign after their long version
-and @code{FLT}, @code{INT} or @code{STR} for floating point numbers,
-integer numbers, and strings (filenames for example) respectively. All
-options have a long format and some have a short format (a single
-character), for more see @ref{Options}.
+The options that need a value have an @key{=} sign after their long version 
and @code{FLT}, @code{INT} or @code{STR} for floating point numbers, integer 
numbers, and strings (filenames for example) respectively.
+All options have a long format and some have a short format (a single character); for more, see @ref{Options}.
 
-When you are using a program, it is often necessary to check the value the
-option has just before the program starts its processing. In other words,
-after it has parsed the command-line options and all configuration
-files. You can see the values of all options that need one with the
-@option{--printparams} or @code{-P} option. @option{--printparams} is
-common to all programs (see @ref{Common options}). In the command below,
-try replacing @code{-P} with @option{--printparams} to see how both do the
-same operation.
+When you are using a program, it is often necessary to check the value the 
option has just before the program starts its processing.
+In other words, after it has parsed the command-line options and all 
configuration files.
+You can see the values of all options that need one with the @option{--printparams} or @option{-P} option.
+@option{--printparams} is common to all programs (see @ref{Common options}).
+In the command below, try replacing @option{-P} with @option{--printparams} to see how both do the same operation.
 
 @example
 $ astcosmiccal -P
 @end example
 
-Let's say you want a different Hubble constant. Try running the following
-command (just adding @option{--H0=70} after the command above) to see how
-the Hubble constant in the output of the command above has changed.
+Let's say you want a different Hubble constant.
+Try running the following command (just adding @option{--H0=70} after the command above) to see how the Hubble constant in the output has changed.
 
 @example
 $ astcosmiccal -P --H0=70
@@ -3074,12 +2420,9 @@ calculations with the new cosmology (or configuration).
 $ astcosmiccal --H0=70 -z2
 @end example
 
-From the output of the @code{--help} option, note how the option for Hubble
-constant has both short (@code{-H}) and long (@code{--H0}) formats. One
-final note is that the equal (@key{=}) sign is not mandatory. In the short
-format, the value can stick to the actual option (the short option name is
-just one character after-all, thus easily identifiable) and in the long
-format, a white-space character is also enough.
+From the output of the @code{--help} option, note how the option for the Hubble constant has both short (@code{-H}) and long (@code{--H0}) formats.
+One final note is that the equal (@key{=}) sign is not mandatory.
+In the short format, the value can stick to the actual option (the short option name is just one character after all, thus easily identifiable) and in the long format, a white-space character is also enough.
 
 @example
 $ astcosmiccal -H70    -z2
@@ -3087,35 +2430,28 @@ $ astcosmiccal --H0 70 -z2 --arcsectandist
 @end example
 
 @noindent
-When an option dosn't need a value, and has a short format (like
-@option{--arcsectandist}), you can easily append it @emph{before} other
-short options. So the last command above can also be written as:
+When an option doesn't need a value and has a short format (like @option{--arcsectandist}, whose short format is @option{-s}), you can easily append it @emph{before} other short options.
+So the last command above can also be written as:
 
 @example
 $ astcosmiccal --H0 70 -sz2
 @end example
 
-Let's assume that in one project, you want to only use rounded cosmological
-parameters (H0 of 70km/s/Mpc and matter density of 0.3). You should
-therefore run CosmicCalculator like this:
+Let's assume that in one project, you want to use only rounded cosmological parameters (an H0 of 70 km/s/Mpc and a matter density of 0.3).
+You should therefore run CosmicCalculator like this:
 
 @example
 $ astcosmiccal --H0=70 --olambda=0.7 --omatter=0.3 -z2
 @end example
 
-But having to type these extra options every time you run CosmicCalculator
-will be prone to errors (typos in particular), frustrating and
-slow. Therefore in Gnuastro, you can put all the options and their values
-in a ``Configuration file'' and tell the programs to read the option values
-from there.
+But having to type these extra options every time you run CosmicCalculator is prone to errors (typos in particular), frustrating, and slow.
+Therefore in Gnuastro, you can put all the options and their values in a ``Configuration file'' and tell the programs to read the option values from there.
 
-Let's create a configuration file... With your favorite text editor, make a
-file named @file{my-cosmology.conf} (or @file{my-cosmology.txt}, the suffix
-doesn't matter, but a more descriptive suffix like @file{.conf} is
-recommended). Then put the following lines inside of it. One space between
-the option value and name is enough, the values are just under each other
-to help in readability. Also note that you can only use long option names
-in configuration files.
+Let's create a configuration file...
+With your favorite text editor, make a file named @file{my-cosmology.conf} (or 
@file{my-cosmology.txt}, the suffix doesn't matter, but a more descriptive 
suffix like @file{.conf} is recommended).
+Then put the following lines inside of it.
+One space between the option name and its value is enough; the values are placed just under each other only to help readability.
+Also note that you can only use long option names in configuration files.
 
 @example
 H0       70
@@ -3124,31 +2460,23 @@ omatter  0.3
 @end example
 
 @noindent
-You can now tell CosmicCalculator to read this file for option values
-immediately using the @option{--config} option as shown below. Do you see
-how the output of the following command corresponds to the option values in
-@file{my-cosmology.conf}, and is therefore identical to the previous
-command?
+You can now tell CosmicCalculator to read this file for option values 
immediately using the @option{--config} option as shown below.
+Do you see how the output of the following command corresponds to the option 
values in @file{my-cosmology.conf}, and is therefore identical to the previous 
command?
 
 @example
 $ astcosmiccal --config=my-cosmology.conf -z2
 @end example
 
-But still, having to type @option{--config=my-cosmology.conf} everytime is
-annoying, isn't it? If you need this cosmology every time you are working
-in a specific directory, you can use Gnuastro's default configuration file
-names and avoid having to type it manually.
+But still, having to type @option{--config=my-cosmology.conf} every time is 
annoying, isn't it?
+If you need this cosmology every time you are working in a specific directory, 
you can use Gnuastro's default configuration file names and avoid having to 
type it manually.
 
-The default configuration files (that are checked if they exist) must be
-placed in the hidden @file{.gnuastro} sub-directory (in the same directory
-you are running the program). Their file name (within @file{.gnuastro})
-must also be the same as the program's executable name. So in the case of
-CosmicCalculator, the default configuration file in a given directory is
-@file{.gnuastro/astcosmiccal.conf}.
+The default configuration files (that are checked if they exist) must be 
placed in the hidden @file{.gnuastro} sub-directory (in the same directory you 
are running the program).
+Their file name (within @file{.gnuastro}) must also be the same as the 
program's executable name.
+So in the case of CosmicCalculator, the default configuration file in a given 
directory is @file{.gnuastro/astcosmiccal.conf}.
 
-Let's do this. We'll first make a directory for our custom cosmology, then
-build a @file{.gnuastro} within it. Finally, we'll copy the custom
-configuration file there:
+Let's do this.
+We'll first make a directory for our custom cosmology, then build a 
@file{.gnuastro} within it.
+Finally, we'll copy the custom configuration file there:
 
 @example
 $ mkdir my-cosmology
@@ -3156,9 +2484,7 @@ $ mkdir my-cosmology/.gnuastro
 $ mv my-cosmology.conf my-cosmology/.gnuastro/astcosmiccal.conf
 @end example
 
-Once you run CosmicCalculator within @file{my-cosmology} (as shown below),
-you will see how your custom cosmology has been implemented without having
-to type anything extra on the command-line.
+Once you run CosmicCalculator within @file{my-cosmology} (as shown below), you 
will see how your custom cosmology has been implemented without having to type 
anything extra on the command-line.
 
 @example
 $ cd my-cosmology
@@ -3166,11 +2492,9 @@ $ astcosmiccal -P
 $ cd ..
 @end example
 
-To further simplify the process, you can use the @option{--setdirconf}
-option. If you are already in your desired working directory, calling this
-option with the others will automatically write the final values (along
-with descriptions) in @file{.gnuastro/astcosmiccal.conf}. For example try
-the commands below:
+To further simplify the process, you can use the @option{--setdirconf} option.
+If you are already in your desired working directory, calling this option with 
the others will automatically write the final values (along with descriptions) 
in @file{.gnuastro/astcosmiccal.conf}.
+For example, try the commands below:
 
 @example
 $ mkdir my-cosmology2
@@ -3181,18 +2505,14 @@ $ astcosmiccal -P
 $ cd ..
 @end example
 
-Gnuastro's programs also have default configuration files for a specific
-user (when run in any directory). This allows you to set a special behavior
-every time a program is run by a specific user. Only the directory and
-filename differ from the above, the rest of the process is similar to
-before. Finally, there are also system-wide configuration files that can be
-used to define the option values for all users on a system. See
-@ref{Configuration file precedence} for a more detailed discussion.
+Gnuastro's programs also have default configuration files for a specific user 
(when run in any directory).
+This allows you to set a special behavior every time a program is run by a 
specific user.
+Only the directory and filename differ from the above, the rest of the process 
is similar to before.
+Finally, there are also system-wide configuration files that can be used to 
define the option values for all users on a system.
+See @ref{Configuration file precedence} for a more detailed discussion.
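+
+For example, here is a minimal sketch of writing such values into your user-wide configuration file, assuming the @option{--setusrconf} option (the user-wide counterpart of @option{--setdirconf}, see @ref{Operating mode options}):
+
+@example
+$ astcosmiccal --H0=70 --olambda=0.7 --omatter=0.3 --setusrconf
+@end example
+
+@noindent
+If you try this, remember to delete the user configuration file it creates (see @ref{Configuration file precedence} for its location), so the rest of this tutorial uses the default cosmology.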
 
-We'll stop the discussion on configuration files here, but you can always
-read about them in @ref{Configuration files}. Before continuing the
-tutorial, let's delete the two extra directories that we don't need any
-more:
+We'll stop the discussion on configuration files here, but you can always read 
about them in @ref{Configuration files}.
+Before continuing the tutorial, let's delete the two extra directories that we 
don't need any more:
 
 @example
 $ rm -rf my-cosmology*
@@ -3201,38 +2521,30 @@ $ rm -rf my-cosmology*
 
 @node Warping to a new pixel grid, Multiextension FITS files NoiseChisel's 
output, Option management and configuration files, General program usage 
tutorial
 @subsection Warping to a new pixel grid
-We are now ready to start processing the downloaded images. The XDF
-datasets we are using here are already aligned to the same pixel
-grid. However, warping to a different/matched pixel grid is commonly needed
-before higher-level analysis when you are using datasets from different
-instruments. So let's have a look at Gnuastro's features warping features
-here.
+We are now ready to start processing the downloaded images.
+The XDF datasets we are using here are already aligned to the same pixel grid.
+However, warping to a different/matched pixel grid is commonly needed before 
higher-level analysis when you are using datasets from different instruments.
+So let's have a look at Gnuastro's warping features here.
 
-Gnuastro's Warp program should be used for warping the pixel-grid (see
-@ref{Warp}). For example, try rotating one of the images by 20 degrees:
+Gnuastro's Warp program should be used for warping the pixel-grid (see 
@ref{Warp}).
+For example, try rotating one of the images by 20 degrees:
 
 @example
 $ astwarp flat-ir/xdf-f160w.fits --rotate=20
 @end example
 
 @noindent
-Open the output (@file{xdf-f160w_rotated.fits}) and see how it is
-rotated. If your final image is already aligned with RA and Dec, you can
-simply use the @option{--align} option and let Warp calculate the necessary
-rotation and apply it. For example, try aligning the rotated image back to
-the standard orientation (just note that because of the two rotations, the
-NaN parts of the image are larger now):
+Open the output (@file{xdf-f160w_rotated.fits}) and see how it is rotated.
+If you want your final image to be aligned with RA and Dec, you can simply use the @option{--align} option and let Warp calculate the necessary rotation and apply it.
+For example, try aligning the rotated image back to the standard orientation (just note that because of the two rotations, the NaN parts of the image are larger now):
 
 @example
 $ astwarp xdf-f160w_rotated.fits --align
 @end example
 
-Warp can generally be used for many kinds of pixel grid manipulation
-(warping), not just rotations. For example the outputs of the commands
-below will respectively have larger pixels (new resolution being one
-quarter the original resolution), get shifted by 2.8 (by sub-pixel), get a
-shear of 2, and be tilted (projected). Run each of them and open the output
-file to see the effect, they will become handy for you in the future.
+Warp can generally be used for many kinds of pixel grid manipulation (warping), not just rotations.
+For example, the outputs of the commands below will respectively have larger pixels (the new resolution being one quarter of the original), get shifted by 2.8 pixels (a shift with a sub-pixel component), get a shear of 2, and be tilted (projected).
+Run each of them and open the output file to see the effect; they will come in handy for you in the future.
 
 @example
 $ astwarp flat-ir/xdf-f160w.fits --scale=0.25
@@ -3242,34 +2554,24 @@ $ astwarp flat-ir/xdf-f160w.fits --project=0.001,0.0005
 @end example
 
 @noindent
-If you need to do multiple warps, you can combine them in one call to
-Warp. For example to first rotate the image, then scale it, run this
-command:
+If you need to do multiple warps, you can combine them in one call to Warp.
+For example, to first rotate the image, then scale it, run this command:
 
 @example
 $ astwarp flat-ir/xdf-f160w.fits --rotate=20 --scale=0.25
 @end example
 
-If you have multiple warps, do them all in one command. Don't warp them in
-separate commands because the correlated noise will become too strong. As
-you see in the matrix that is printed when you run Warp, it merges all the
-warps into a single warping matrix (see @ref{Merging multiple warpings})
-and simply applies that (mixes the pixel values) just once. However, if you
-run Warp multiple times, the pixels will be mixed multiple times, creating
-a strong artificial blur/smoothing, or stronger correlated noise.
+If you have multiple warps, do them all in one command.
+Don't warp them in separate commands because the correlated noise will become 
too strong.
+As you see in the matrix that is printed when you run Warp, it merges all the 
warps into a single warping matrix (see @ref{Merging multiple warpings}) and 
simply applies that (mixes the pixel values) just once.
+However, if you run Warp multiple times, the pixels will be mixed multiple 
times, creating a strong artificial blur/smoothing, or stronger correlated 
noise.
 
-Recall that the merging of multiple warps is done through matrix
-multiplication, therefore order matters in the separate operations. At a
-lower level, through Warp's @option{--matrix} option, you can directly
-request your desired final warp and don't have to break it up into
-different warps like above (see @ref{Invoking astwarp}).
+Recall that the merging of multiple warps is done through matrix multiplication; therefore, the order of the separate operations matters.
+At a lower level, through Warp's @option{--matrix} option, you can directly 
request your desired final warp and don't have to break it up into different 
warps like above (see @ref{Invoking astwarp}).
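+
+You can check the order dependence with a small experiment (an illustrative sketch, assuming the modular warps are applied in the order they appear on the command line, see @ref{Invoking astwarp}); compare the matrices that the two commands below print:
+
+@example
+$ astwarp flat-ir/xdf-f160w.fits --rotate=20 --shear=2  \
+          --output=rotate-shear.fits
+$ astwarp flat-ir/xdf-f160w.fits --shear=2 --rotate=20  \
+          --output=shear-rotate.fits
+@end example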
 
-Fortunately these datasets are already aligned to the same pixel grid, so
-you don't actually need the files that were just generated. You can safely
-delete them all with the following command. Here, you see why we put the
-processed outputs that we need later into a separate directory. In this
-way, the top directory can be used for temporary files for testing that you
-can simply delete with a generic command like below.
+Fortunately, these datasets are already aligned to the same pixel grid, so you don't actually need the files that were just generated.
+You can safely delete them all with the following command.
+Here, you see why we put the processed outputs that we need later into a separate directory.
+In this way, the top directory can be used for temporary test files that you can simply delete with a generic command like the one below.
 
 @example
 $ rm *.fits
@@ -3278,80 +2580,51 @@ $ rm *.fits
 
 @node Multiextension FITS files NoiseChisel's output, NoiseChisel optimization 
for detection, Warping to a new pixel grid, General program usage tutorial
 @subsection Multiextension FITS files (NoiseChisel's output)
-Having completed a review of the basics in the previous sections, we are
-now ready to separate the signal (galaxies or stars) from the background
-noise in the image. We will be using the results of @ref{Dataset inspection
-and cropping}, so be sure you already have them. Gnuastro has NoiseChisel
-for this job. But NoiseChisel's output is a multi-extension FITS file,
-therefore to better understand how to use NoiseChisel, let's take a look at
-multi-extension FITS files and how you can interact with them.
+Having completed a review of the basics in the previous sections, we are now 
ready to separate the signal (galaxies or stars) from the background noise in 
the image.
+We will be using the results of @ref{Dataset inspection and cropping}, so be 
sure you already have them.
+Gnuastro has NoiseChisel for this job.
+But NoiseChisel's output is a multi-extension FITS file, so to better understand how to use NoiseChisel, let's take a look at multi-extension FITS files and how you can interact with them.
 
-In the FITS format, each extension contains a separate dataset (image in
-this case). You can get basic information about the extensions in a FITS
-file with Gnuastro's Fits program (see @ref{Fits}). To start with, let's
-run NoiseChisel without any options, then use Gnuastro's FITS program to
-inspect the number of extensions in this file.
+In the FITS format, each extension contains a separate dataset (image in this 
case).
+You can get basic information about the extensions in a FITS file with 
Gnuastro's Fits program (see @ref{Fits}).
+To start with, let's run NoiseChisel without any options, then use Gnuastro's 
FITS program to inspect the number of extensions in this file.
 
 @example
 $ astnoisechisel flat-ir/xdf-f160w.fits
 $ astfits xdf-f160w_detected.fits
 @end example
 
-From the output list, we see that NoiseChisel's output contains 5
-extensions and the first (counting from zero, with name
-@code{NOISECHISEL-CONFIG}) is empty: it has value of @code{0} in the last
-column (which shows its size). The first extension in all the outputs of
-Gnuastro's programs only contains meta-data: data about/describing the
-datasets within (all) the output's extensions. This is recommended by the
-FITS standard, see @ref{Fits} for more. In the case of Gnuastro's programs,
-this generic zero-th/meta-data extension (for the whole file) contains all
-the configuration options of the program that created the file.
-
-The second extension of NoiseChisel's output (numbered 1, named
-@code{INPUT-NO-SKY}) is the Sky-subtracted input that you provided. The
-third (@code{DETECTIONS}) is NoiseChisel's main output which is a binary
-image with only two possible values for all pixels: 0 for noise and 1 for
-signal. Since it only has two values, to avoid taking too much space on
-your computer, its numeric datatype an unsigned 8-bit integer (or
-@code{uint8})@footnote{To learn more about numeric data types see
-@ref{Numeric data types}.}. The fourth and fifth (@code{SKY} and
-@code{SKY_STD}) extensions, have the Sky and its standard deviation values
-for the input on a tile grid and were calculated over the undetected
-regions (for more on the importance of the Sky value, see @ref{Sky value}).
-
-Metadata regarding how the analysis was done (or a dataset was created) is
-very important for higher-level analysis and reproducibility. Therefore,
-Let's first take a closer look at the @code{NOISECHISEL-CONFIG}
-extension. If you specify a special header in the FITS file, Gnuastro's
-Fits program will print the header keywords (metadata) of that
-extension. You can either specify the HDU/extension counter (starting from
-0), or name. Therefore, the two commands below are identical for this file:
+From the output list, we see that NoiseChisel's output contains 5 extensions and that the first (counting from zero, named @code{NOISECHISEL-CONFIG}) is empty: it has a value of @code{0} in the last column (which shows its size).
+The first extension in all the outputs of Gnuastro's programs only contains 
meta-data: data about/describing the datasets within (all) the output's 
extensions.
+This is recommended by the FITS standard; see @ref{Fits} for more.
+In the case of Gnuastro's programs, this generic zero-th/meta-data extension 
(for the whole file) contains all the configuration options of the program that 
created the file.
+
+The second extension of NoiseChisel's output (numbered 1, named 
@code{INPUT-NO-SKY}) is the Sky-subtracted input that you provided.
+The third (@code{DETECTIONS}) is NoiseChisel's main output which is a binary 
image with only two possible values for all pixels: 0 for noise and 1 for 
signal.
+Since it only has two values, to avoid taking too much space on your computer, its numeric datatype is an unsigned 8-bit integer (or @code{uint8})@footnote{To learn more about numeric data types see @ref{Numeric data types}.}.
+The fourth and fifth (@code{SKY} and @code{SKY_STD}) extensions have the Sky and its standard deviation values for the input on a tile grid; they were calculated over the undetected regions (for more on the importance of the Sky value, see @ref{Sky value}).
+
+Metadata regarding how the analysis was done (or a dataset was created) is 
very important for higher-level analysis and reproducibility.
+Therefore, let's first take a closer look at the @code{NOISECHISEL-CONFIG} extension.
+If you specify a particular HDU/extension of the FITS file, Gnuastro's Fits program will print the header keywords (metadata) of that extension.
+You can specify the extension either by its counter (starting from 0) or by its name.
+Therefore, the two commands below are identical for this file:
 
 @example
 $ astfits xdf-f160w_detected.fits -h0
 $ astfits xdf-f160w_detected.fits -hNOISECHISEL-CONFIG
 @end example
 
-The first group of FITS header keywords are standard keywords (containing
-the @code{SIMPLE} and @code{BITPIX} keywords the first empty line). They
-are required by the FITS standard and must be present in any FITS
-extension. The second group contains the input file and all the options
-with their values in that run of NoiseChisel. Finally, the last group
-contains the date and version information of Gnuastro and its
-dependencies. The ``versions and date'' group of keywords are present in
-all Gnuastro's FITS extension outputs, for more see @ref{Output FITS
-files}.
-
-Note that if a keyword name is larger than 8 characters, it is preceded by
-a @code{HIERARCH} keyword and that all keyword names are in capital
-letters. Therefore, if you want to see only one keyword's value by feeding
-the output to Grep, you should ask Grep to ignore case with its @option{-i}
-option (short name for @option{--ignore-case}). For example, below we'll
-check the value to the @option{--snminarea} option, note how we don't need
-Grep's @option{-i} option when it is fed with @command{astnoisechisel -P}
-since it is already in small-caps there. The extra white spaces in the
-first command are only to help in readability, you can ignore them when
-typing.
+The first group of FITS header keywords are standard keywords (containing the @code{SIMPLE} and @code{BITPIX} keywords, before the first empty line).
+They are required by the FITS standard and must be present in any FITS 
extension.
+The second group contains the input file and all the options with their values 
in that run of NoiseChisel.
+Finally, the last group contains the date and version information of Gnuastro 
and its dependencies.
+The ``versions and date'' group of keywords are present in all Gnuastro's FITS 
extension outputs, for more see @ref{Output FITS files}.
+
+Note that if a keyword name is longer than 8 characters, it is preceded by a @code{HIERARCH} keyword, and that all keyword names are in capital letters.
+Therefore, if you want to see only one keyword's value by feeding the output to Grep, you should ask Grep to ignore case with its @option{-i} option (short name for @option{--ignore-case}).
+For example, below we'll check the value given to the @option{--snminarea} option.
+Note how we don't need Grep's @option{-i} option when it is fed with the output of @command{astnoisechisel -P}, since the option names are already in lower case there.
+The extra white spaces in the first command are only to help in readability; you can ignore them when typing.
 
 @example
 $ astnoisechisel -P                   | grep    snminarea
@@ -3359,78 +2632,51 @@ $ astfits xdf-f160w_detected.fits -h0 | grep -i 
snminarea
 @end example
 
 @noindent
-The metadata (that is stored in the output) can later be used to exactly
-reproduce/understand your result, even if you have lost/forgot the command
-you used to create the file. This feature is present in all of Gnuastro's
-programs, not just NoiseChisel.
+The metadata (that is stored in the output) can later be used to exactly reproduce/understand your result, even if you have lost/forgotten the command you used to create the file.
+This feature is present in all of Gnuastro's programs, not just NoiseChisel.
 
 @cindex DS9
 @cindex GNOME
 @cindex SAO DS9
-Let's continue with the extensions in NoiseChisel's output that contain a
-dataset by visually inspecting them (here, we'll use SAO DS9). Since the
-file contains multiple related extensions, the easiest way to view all of
-them in DS9 is to open the file as a ``Multi-extension data cube'' with the
-@option{-mecube} option as shown below@footnote{You can configure your
-graphic user interface to open DS9 in multi-extension cube mode by default
-when using the GUI (double clicking on the file). If your graphic user
-interface is GNOME (another GNU software, it is most common in GNU/Linux
-operating systems), a full description is given in @ref{Viewing
-multiextension FITS images}}.
+Let's continue with the extensions in NoiseChisel's output that contain a 
dataset by visually inspecting them (here, we'll use SAO DS9).
+Since the file contains multiple related extensions, the easiest way to view all of them in DS9 is to open the file as a ``Multi-extension data cube'' with the @option{-mecube} option as shown below@footnote{You can configure your graphical user interface to open DS9 in multi-extension cube mode by default when using the GUI (double clicking on the file).
+If your graphical user interface is GNOME (another GNU software; it is most common in GNU/Linux operating systems), a full description is given in @ref{Viewing multiextension FITS images}.}.
 
 @example
 $ ds9 -mecube xdf-f160w_detected.fits -zscale -zoom to fit
 @end example
 
-A ``cube'' window opens along with DS9's main window. The buttons and
-horizontal scroll bar in this small new window can be used to navigate
-between the extensions. In this mode, all DS9's settings (for example zoom
-or color-bar) will be identical between the extensions. Try zooming into to
-one part and flipping through the extensions to see how the galaxies were
-detected along with the Sky and Sky standard deviation values for that
-region. Just have in mind that NoiseChisel's job is @emph{only} detection
-(separating signal from noise), We'll do segmentation on this result later
-to find the individual galaxies/peaks over the detected pixels.
+A ``cube'' window opens along with DS9's main window.
+The buttons and horizontal scroll bar in this small new window can be used to 
navigate between the extensions.
+In this mode, all DS9's settings (for example zoom or color-bar) will be 
identical between the extensions.
+Try zooming in to one part and flipping through the extensions to see how the galaxies were detected along with the Sky and Sky standard deviation values for that region.
+Just keep in mind that NoiseChisel's job is @emph{only} detection (separating signal from noise); we'll do segmentation on this result later to find the individual galaxies/peaks over the detected pixels.
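+
+As a small aside, if you only want to inspect a single extension (for example @code{DETECTIONS}) instead of the whole cube, you can also use DS9's standard @file{file.fits[EXTENSION]} syntax (nothing specific to Gnuastro; the quotes keep the shell from interpreting the brackets):
+
+@example
+$ ds9 "xdf-f160w_detected.fits[DETECTIONS]" -zscale -zoom to fit
+@end example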
 
-Each HDU/extension in a FITS file is an independent dataset (image or
-table) which you can delete from the FITS file, or copy/cut to another
-file. For example, with the command below, you can copy NoiseChisel's
-@code{DETECTIONS} HDU/extension to another file:
+Each HDU/extension in a FITS file is an independent dataset (image or table) 
which you can delete from the FITS file, or copy/cut to another file.
+For example, with the command below, you can copy NoiseChisel's 
@code{DETECTIONS} HDU/extension to another file:
 
 @example
 $ astfits xdf-f160w_detected.fits --copy=DETECTIONS -odetections.fits
 @end example
 
-There are similar options to conveniently cut (@option{--cut}, copy, then
-remove from the input) or delete (@option{--remove}) HDUs from a FITS file
-also. See @ref{HDU manipulation} for more.
+There are also similar options to conveniently cut (@option{--cut}: copy, then remove from the input) or delete (@option{--remove}) HDUs from a FITS file.
+See @ref{HDU manipulation} for more.
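+
+For example, here is a sketch of @option{--cut} (the file names are only illustrative; don't run this on @file{xdf-f160w_detected.fits} itself, since cutting removes the extension from the input and we will need this file later):
+
+@example
+$ astfits input.fits --cut=DETECTIONS --output=cut.fits
+@end example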
 
 
 
 @node NoiseChisel optimization for detection, NoiseChisel optimization for 
storage, Multiextension FITS files NoiseChisel's output, General program usage 
tutorial
 @subsection NoiseChisel optimization for detection
-In @ref{Multiextension FITS files NoiseChisel's output}, we ran NoiseChisel
-and reviewed NoiseChisel's output format. Now that you have a better
-feeling for multi-extension FITS files, let's optimize NoiseChisel for this
-particular dataset.
-
-One good way to see if you have missed any signal (small galaxies, or the
-wings of brighter galaxies) is to mask all the detected pixels and inspect
-the noise pixels. For this, you can use Gnuastro's Arithmetic program (in
-particular its @code{where} operator, see @ref{Arithmetic operators}). The
-command below will produce @file{mask-det.fits}. In it, all the pixels in
-the @code{INPUT-NO-SKY} extension that are flagged 1 in the
-@code{DETECTIONS} extension (dominated by signal, not noise) will be set to
-NaN.
-
-Since the various extensions are in the same file, for each dataset we need
-the file and extension name. To make the command easier to
-read/write/understand, let's use shell variables: `@code{in}' will be used
-for the Sky-subtracted input image and `@code{det}' will be used for the
-detection map. Recall that a shell variable's value can be retrieved by
-adding a @code{$} before its name, also note that the double quotations are
-necessary when we have white-space characters in a variable name (like this
-case).
+In @ref{Multiextension FITS files NoiseChisel's output}, we ran NoiseChisel and reviewed its output format.
+Now that you have a better feeling for multi-extension FITS files, let's 
optimize NoiseChisel for this particular dataset.
+
+One good way to see if you have missed any signal (small galaxies, or the 
wings of brighter galaxies) is to mask all the detected pixels and inspect the 
noise pixels.
+For this, you can use Gnuastro's Arithmetic program (in particular its 
@code{where} operator, see @ref{Arithmetic operators}).
+The command below will produce @file{mask-det.fits}.
+In it, all the pixels in the @code{INPUT-NO-SKY} extension that are flagged 1 
in the @code{DETECTIONS} extension (dominated by signal, not noise) will be set 
to NaN.
+
+Since the various extensions are in the same file, for each dataset we need 
the file and extension name.
+To make the command easier to read/write/understand, let's use shell 
variables: `@code{in}' will be used for the Sky-subtracted input image and 
`@code{det}' will be used for the detection map.
+Recall that a shell variable's value can be retrieved by adding a @code{$} before its name.
+Also note that the double quotation marks are necessary because the variables' values contain white-space characters (as in this case).
 
 @example
 $ in="xdf-f160w_detected.fits -hINPUT-NO-SKY"
@@ -3439,187 +2685,130 @@ $ astarithmetic $in $det nan where 
--output=mask-det.fits
 @end example
 
 @noindent
-To invert the result (only keep the detected pixels), you can flip the
-detection map (from 0 to 1 and vice-versa) by adding a `@code{not}' after
-the second @code{$det}:
+To invert the result (only keep the detected pixels), you can flip the 
detection map (from 0 to 1 and vice-versa) by adding a `@code{not}' after the 
second @code{$det}:
 
 @example
 $ astarithmetic $in $det not nan where --output=mask-sky.fits
 @end example
 
-Looking again at the detected pixels, we see that there are thin
-connections between many of the smaller objects or extending from larger
-objects. This shows that we have dug in too deep, and that we are following
-correlated noise.
-
-Correlated noise is created when we warp datasets from individual exposures
-(that are each slightly offset compared to each other) into the same pixel
-grid, then add them to form the final result. Because it mixes nearby pixel
-values, correlated noise is a form of convolution and it smooths the
-image. In terms of the number of exposures (and thus correlated noise), the
-XDF dataset is by no means an ordinary dataset. It is the result of warping
-and adding roughly 80 separate exposures which can create strong correlated
-noise/smoothing. In common surveys the number of exposures is usually 10 or
-less.
-
-Let's tweak NoiseChisel's configuration a little to get a better result on
-this dataset. Don't forget that ``@emph{Good statistical analysis is not a
-purely routine matter, and generally calls for more than one pass through
-the computer}'' (Anscombe 1973, see @ref{Science and its tools}). A good
-scientist must have a good understanding of her tools to make a meaningful
-analysis. So don't hesitate in playing with the default configuration and
-reviewing the manual when you have a new dataset in front of you. Robust
-data analysis is an art, therefore a good scientist must first be a good
-artist.
-
-NoiseChisel can produce ``Check images'' to help you visualize and inspect
-how each step is done. You can see all the check images it can produce with
-this command.
+Looking again at the detected pixels, we see that there are thin connections 
between many of the smaller objects or extending from larger objects.
+This shows that we have dug in too deep, and that we are following correlated 
noise.
+
+Correlated noise is created when we warp datasets from individual exposures 
(that are each slightly offset compared to each other) into the same pixel 
grid, then add them to form the final result.
+Because it mixes nearby pixel values, correlated noise is a form of 
convolution and it smooths the image.
+In terms of the number of exposures (and thus correlated noise), the XDF 
dataset is by no means an ordinary dataset.
+It is the result of warping and adding roughly 80 separate exposures which can 
create strong correlated noise/smoothing.
+In common surveys the number of exposures is usually 10 or less.
+
+Let's tweak NoiseChisel's configuration a little to get a better result on 
this dataset.
+Don't forget that ``@emph{Good statistical analysis is not a purely routine 
matter, and generally calls for more than one pass through the computer}'' 
(Anscombe 1973, see @ref{Science and its tools}).
+A good scientist must have a good understanding of her tools to make a 
meaningful analysis.
+So don't hesitate to play with the default configuration and review the manual when you have a new dataset in front of you.
+Robust data analysis is an art, therefore a good scientist must first be a 
good artist.
+
+NoiseChisel can produce ``Check images'' to help you visualize and inspect how 
each step is done.
+You can see all the check images it can produce with this command.
 
 @example
 $ astnoisechisel --help | grep check
 @end example
 
-Let's check the overall detection process to get a better feeling of what
-NoiseChisel is doing with the following command. To learn the details of
-NoiseChisel in more detail, please see
-@url{https://arxiv.org/abs/1505.01664, Akhlaghi and Ichikawa [2015]}. Also
-see @ref{NoiseChisel changes after publication}.
+Let's check the overall detection process to get a better feeling of what 
NoiseChisel is doing with the following command.
+To learn the details of NoiseChisel's processing, please see @url{https://arxiv.org/abs/1505.01664, Akhlaghi and Ichikawa [2015]}.
+Also see @ref{NoiseChisel changes after publication}.
 
 @example
 $ astnoisechisel flat-ir/xdf-f160w.fits --checkdetection
 @end example
 
-The check images/tables are also multi-extension FITS files.  As you saw
-from the command above, when check datasets are requested, NoiseChisel
-won't go to the end. It will abort as soon as all the extensions of the
-check image are ready. Please list the extensions of the output with
-@command{astfits} and then opening it with @command{ds9} as we done
-above. If you have read the paper, you will see why there are so many
-extensions in the check image.
+The check images/tables are also multi-extension FITS files.
+As you saw from the command above, when check datasets are requested, 
NoiseChisel won't go to the end.
+It will abort as soon as all the extensions of the check image are ready.
+Please list the extensions of the output with @command{astfits} and then open it with @command{ds9} as we did above.
+If you have read the paper, you will see why there are so many extensions in 
the check image.
 
 @example
 $ astfits xdf-f160w_detcheck.fits
 $ ds9 -mecube xdf-f160w_detcheck.fits -zscale -zoom to fit
 @end example
 
-In order to understand the parameters and their biases (especially as you
-are starting to use Gnuastro, or running it a new dataset), it is
-@emph{strongly} encouraged to play with the different parameters and use
-the respective check images to see which step is affected by your changes
-and how, for example see @ref{Detecting large extended targets}.
+In order to understand the parameters and their biases (especially as you are starting to use Gnuastro, or running it on a new dataset), it is @emph{strongly} encouraged to play with the different parameters and use the respective check images to see which step is affected by your changes and how; for example, see @ref{Detecting large extended targets}.
 
 @cindex FWHM
-The @code{OPENED_AND_LABELED} extension shows the initial detection step of
-NoiseChisel. We see these thin connections between smaller points are
-already present here (a relatively early stage in the processing). Such
-connections at the lowest surface brightness limits usually occur when the
-dataset is too smoothed. Because of correlated noise, the dataset is
-already artificially smoothed, therefore further smoothing it with the
-default kernel may be the problem. One solution is thus to use a sharper
-kernel (NoiseChisel's first step in its processing).
-
-By default NoiseChisel uses a Gaussian with full-width-half-maximum (FWHM)
-of 2 pixels. We can use Gnuastro's MakeProfiles to build a kernel with FWHM
-of 1.5 pixel (truncated at 5 times the FWHM, like the default) using the
-following command. MakeProfiles is a powerful tool to build any number of
-mock profiles on one image or independently, to learn more of its features
-and capabilities, see @ref{MakeProfiles}.
+The @code{OPENED_AND_LABELED} extension shows the initial detection step of 
NoiseChisel.
+We see that these thin connections between smaller points are already present here (a relatively early stage in the processing).
+Such connections at the lowest surface brightness limits usually occur when 
the dataset is too smoothed.
+Because of correlated noise, the dataset is already artificially smoothed, 
therefore further smoothing it with the default kernel may be the problem.
+One solution is thus to use a sharper kernel (NoiseChisel's first step in its 
processing).
+
+By default NoiseChisel uses a Gaussian with full-width-half-maximum (FWHM) of 
2 pixels.
+We can use Gnuastro's MakeProfiles to build a kernel with a FWHM of 1.5 pixels (truncated at 5 times the FWHM, like the default) using the following command.
+MakeProfiles is a powerful tool to build any number of mock profiles, on one image or independently; to learn more of its features and capabilities, see @ref{MakeProfiles}.
 
 @example
 $ astmkprof --kernel=gaussian,1.5,5 --oversample=1
 @end example
 
 @noindent
-Please open the output @file{kernel.fits} and have a look (it is very small
-and sharp). We can now tell NoiseChisel to use this instead of the default
-kernel with the following command (we'll keep checking the detection steps)
+Please open the output @file{kernel.fits} and have a look (it is very small 
and sharp).
+We can now tell NoiseChisel to use this instead of the default kernel with the following command (we'll keep checking the detection steps):
 
 @example
 $ astnoisechisel flat-ir/xdf-f160w.fits --kernel=kernel.fits  \
                  --checkdetection
 @end example
 
-Looking at the @code{OPENED_AND_LABELED} extension, we see that the thin
-connections between smaller peaks has now significantly decreased. Going
-two extensions/steps ahead (in the first @code{HOLES-FILLED}), you can see
-that during the process of finding false pseudo-detections, too many holes
-have been filled: do you see how the many of the brighter galaxies are
-connected? At this stage all holes are filled, irrespective of their size.
+Looking at the @code{OPENED_AND_LABELED} extension, we see that the thin connections between smaller peaks have now significantly decreased.
+Going two extensions/steps ahead (in the first @code{HOLES-FILLED}), you can see that during the process of finding false pseudo-detections, too many holes have been filled: do you see how many of the brighter galaxies are connected?
+At this stage all holes are filled, irrespective of their size.
 
-Try looking two extensions ahead (in the first @code{PSEUDOS-FOR-SN}), you
-can see that there aren't too many pseudo-detections because of all those
-extended filled holes. If you look closely, you can see the number of
-pseudo-detections in the result NoiseChisel prints (around 5000). This is
-another side-effect of correlated noise. To address it, we should slightly
-increase the pseudo-detection threshold (before changing
-@option{--dthresh}, run with @option{-P} to see the default value):
+Looking two extensions ahead (in the first @code{PSEUDOS-FOR-SN}), you can see that there aren't many pseudo-detections because of all those extended filled holes.
+If you look closely, you can see the number of pseudo-detections that NoiseChisel prints in its output (around 5000).
+This is another side-effect of correlated noise.
+To address it, we should slightly increase the pseudo-detection threshold 
(before changing @option{--dthresh}, run with @option{-P} to see the default 
value):
 
 @example
 $ astnoisechisel flat-ir/xdf-f160w.fits --kernel=kernel.fits \
                  --dthresh=0.1 --checkdetection
 @end example
 
-Before visually inspecting the check image, you can already see the effect
-of this change in NoiseChisel's command-line output: notice how the number
-of pseudos has increased to more than 6000. Open the check image now and
-have a look, you can see how the pseudo-detections are distributed much
-more evenly in the image.
+Before visually inspecting the check image, you can already see the effect of 
this change in NoiseChisel's command-line output: notice how the number of 
pseudos has increased to more than 6000.
+Open the check image now and have a look; you can see how the pseudo-detections are distributed much more evenly over the image.
 
 @cartouche
 @noindent
-@strong{Maximize the number of pseudo-detecitons:} For a new noise-pattern
-(different instrument), play with @code{--dthresh} until you get a maximal
-number of pseudo-detections (the total number of pseudo-detections is
-printed on the command-line when you run NoiseChisel).
+@strong{Maximize the number of pseudo-detections:} For a new noise-pattern (different instrument), play with @option{--dthresh} until you get a maximal number of pseudo-detections (the total number of pseudo-detections is printed on the command-line when you run NoiseChisel).
 @end cartouche
 
-The signal-to-noise ratio of pseudo-detections define NoiseChisel's
-reference for removing false detections, so they are very important to get
-right. Let's have a look at their signal-to-noise distribution with
-@option{--checksn}.
+The signal-to-noise ratios of the pseudo-detections define NoiseChisel's reference for removing false detections, so they are very important to get right.
+Let's have a look at their signal-to-noise distribution with 
@option{--checksn}.
 
 @example
 $ astnoisechisel flat-ir/xdf-f160w.fits --kernel=kernel.fits  \
                  --dthresh=0.1 --checkdetection --checksn
 @end example
 
-The output (@file{xdf-f160w_detsn.fits}) contains two extensions for the
-pseudo-detections over the undetected (sky) regions and those over
-detections. The first column is the pseudo-detection label which you can
-see in the respective@footnote{The first @code{PSEUDOS-FOR-SN} in
-@file{xdf-f160w_detsn.fits} is for the pseudo-detections over the
-undetected regions and the second is for those over detected regions.}
-@code{PSEUDOS-FOR-SN} extension of @file{xdf-f160w_detcheck.fits}. You can
-see the table columns with the first command below and get a feeling for
-its distribution with the second command (the two Table and Statistics
-programs will be discussed later in the tutorial)
+The output (@file{xdf-f160w_detsn.fits}) contains two extensions for the 
pseudo-detections over the undetected (sky) regions and those over detections.
+The first column is the pseudo-detection label which you can see in the 
respective@footnote{The first @code{PSEUDOS-FOR-SN} in 
@file{xdf-f160w_detsn.fits} is for the pseudo-detections over the undetected 
regions and the second is for those over detected regions.} 
@code{PSEUDOS-FOR-SN} extension of @file{xdf-f160w_detcheck.fits}.
+You can see the table columns with the first command below and get a feeling for its distribution with the second command (the Table and Statistics programs will be discussed later in the tutorial).
 
 @example
 $ asttable xdf-f160w_detsn.fits -hSKY_PSEUDODET_SN
 $ aststatistics xdf-f160w_detsn.fits -hSKY_PSEUDODET_SN -c2
 @end example
 
-The correlated noise is again visible in this pseudo-detection
-signal-to-noise distribution: it is highly skewed. A small change in the
-quantile will translate into a big change in the S/N value. For example see
-the difference between the three 0.99, 0.95 and 0.90 quantiles with this
-command:
+The correlated noise is again visible in this pseudo-detection signal-to-noise 
distribution: it is highly skewed.
+A small change in the quantile will translate into a big change in the S/N 
value.
+For example, see the difference between the 0.99, 0.95, and 0.90 quantiles with this command:
 
 @example
 $ aststatistics xdf-f160w_detsn.fits -hSKY_PSEUDODET_SN -c2      \
                 --quantile=0.99 --quantile=0.95 --quantile=0.90
 @end example
 
-If you run NoiseChisel with @option{-P}, you'll see the default
-signal-to-noise quantile @option{--snquant} is 0.99. In effect with this
-option you specify the purity level you want (contamination by false
-detections). With the @command{aststatistics} command above, you see that a
-small number of extra false detections (impurity) in the final result
-causes a big change in completeness (you can detect more lower
-signal-to-noise true detections). So let's loosen-up our desired purity
-level, remove the check-image options, and then mask the detected pixels
-like before to see if we have missed anything.
+If you run NoiseChisel with @option{-P}, you'll see the default 
signal-to-noise quantile @option{--snquant} is 0.99.
+In effect, with this option you specify the purity level you want (how much 
contamination by false detections you will tolerate).
+With the @command{aststatistics} command above, you see that accepting a 
small number of extra false detections (impurity) in the final result buys a 
big gain in completeness (you can detect more true detections of lower 
signal-to-noise).
+So let's loosen up our desired purity level, remove the check-image options, 
and then mask the detected pixels like before to see if we have missed anything.
 
 @example
 $ astnoisechisel flat-ir/xdf-f160w.fits --kernel=kernel.fits  \
@@ -3629,34 +2818,26 @@ $ det="xdf-f160w_detected.fits -hDETECTIONS"
 $ astarithmetic $in $det nan where --output=mask-det.fits
 @end example
 
-Overall it seems good, but if you play a little with the color-bar and look
-closer in the noise, you'll see a few very sharp, but faint, objects that
-have not been detected. This only happens for under-sampled datasets like
-HST (where the pixel size is larger than the point spread function
-FWHM). So this won't happen on ground-based images. Because of this, sharp
-and faint objects will be very small and eroded too easily during
-NoiseChisel's erosion step.
+Overall it seems good, but if you play a little with the color-bar and look 
closer in the noise, you'll see a few very sharp, but faint, objects that have 
not been detected.
+This only happens for under-sampled datasets like HST's (where the pixel size 
is larger than the FWHM of the point spread function), so it won't happen on 
ground-based images.
+Because of the under-sampling, such sharp and faint objects cover very few 
pixels and are eroded too easily during NoiseChisel's erosion step.
 
-To address this problem of sharp objects, we can use NoiseChisel's
-@option{--noerodequant} option. All pixels above this quantile will not be
-eroded, thus allowing us to preserve faint and sharp objects. Check its
-default value, then run NoiseChisel like below and make the mask again. You
-will see many of those sharp objects are now detected.
+To address this problem of sharp objects, we can use NoiseChisel's 
@option{--noerodequant} option.
+Pixels above this quantile will not be eroded, thus allowing us to preserve 
faint and sharp objects.
+Check its default value, then run NoiseChisel like below and make the mask 
again.
+You will see many of those sharp objects are now detected.
 
 @example
 $ astnoisechisel flat-ir/xdf-f160w.fits --kernel=kernel.fits     \
                  --noerodequant=0.95 --dthresh=0.1 --snquant=0.95
 @end example
 
-This seems to be fine and we can continue with our analysis. To avoid
-having to write these options on every call to NoiseChisel, we'll just make
-a configuration file in a visible @file{config} directory. Then we'll
-define the hidden @file{.gnuastro} directory (that all Gnuastro's programs
-will look into for configuration files) as a symbolic link to the
-@file{config} directory. Finally, we'll write the finalized values of the
-options into NoiseChisel's standard configuration file within that
-directory. We'll also put the kernel in a separate directory to keep the
-top directory clean of any files we later need.
+This seems to be fine and we can continue with our analysis.
+To avoid having to write these options on every call to NoiseChisel, we'll 
just make a configuration file in a visible @file{config} directory.
+Then we'll define the hidden @file{.gnuastro} directory (that all Gnuastro's 
programs will look into for configuration files) as a symbolic link to the 
@file{config} directory.
+Finally, we'll write the finalized values of the options into NoiseChisel's 
standard configuration file within that directory.
+We'll also put the kernel in a separate directory so that the files we will 
need later don't clutter the top directory.
 
 @example
 $ mkdir kernel config
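# A hedged sketch of the remaining setup steps (the option values are
# the ones we settled on above; file and directory names as described):
$ mv kernel.fits kernel/
$ ln -s config .gnuastro
$ echo "kernel       kernel/kernel.fits"  > config/astnoisechisel.conf
$ echo "noerodequant 0.95"               >> config/astnoisechisel.conf
$ echo "dthresh      0.1"                >> config/astnoisechisel.conf
$ echo "snquant      0.95"               >> config/astnoisechisel.conf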
@@ -3682,93 +2863,68 @@ $ astnoisechisel flat-ir/xdf-f105w.fits 
--output=nc/xdf-f105w.fits
 @node NoiseChisel optimization for storage, Segmentation and making a catalog, 
NoiseChisel optimization for detection, General program usage tutorial
 @subsection NoiseChisel optimization for storage
 
-As we showed before (in @ref{Multiextension FITS files NoiseChisel's
-output}), NoiseChisel's output is a multi-extension FITS file with several
-images the same size as the input. As the input datasets get larger this
-output can become hard to manage and waste a lot of storage
-space. Fortunately there is a solution to this problem (which is also
-useful for Segment's outputs). But first, let's have a look at the volume
-of NoiseChisel's output from @ref{NoiseChisel optimization for detection}
-(fast answer, its larger than 100 mega-bytes):
+As we showed before (in @ref{Multiextension FITS files NoiseChisel's output}), 
NoiseChisel's output is a multi-extension FITS file with several images the 
same size as the input.
+As the input datasets get larger this output can become hard to manage and 
waste a lot of storage space.
+Fortunately there is a solution to this problem (which is also useful for 
Segment's outputs).
+But first, let's have a look at the volume of NoiseChisel's output from 
@ref{NoiseChisel optimization for detection} (fast answer: it's larger than 
100 megabytes):
 
 @example
 $ ls -lh nc/xdf-f160w.fits
 @end example
 
-Two options can drastically decrease NoiseChisel's output file size: 1)
-With the @option{--rawoutput} option, NoiseChisel won't create a
-Sky-subtracted input. After all, it is redundant: you can always generate
-it by subtracting the Sky from the input image (which you have in your
-database) using the Arithmetic program. 2) With the
-@option{--oneelempertile}, you can tell NoiseChisel to store its Sky and
-Sky standard deviation results with one pixel per tile (instead of many
-pixels per tile).
+Two options can drastically decrease NoiseChisel's output file size: 1) With 
the @option{--rawoutput} option, NoiseChisel won't create a Sky-subtracted 
input.
+After all, it is redundant: you can always generate it by subtracting the Sky 
from the input image (which you have in your database) using the Arithmetic 
program.
+2) With the @option{--oneelempertile} option, you can tell NoiseChisel to 
store its Sky and Sky standard deviation results with one pixel per tile 
(instead of many pixels per tile).
 
 @example
 $ astnoisechisel flat-ir/xdf-f160w.fits --oneelempertile --rawoutput
 @end example
 
 @noindent
-The output is now just under 8 mega byes! But you can even be more
-efficient in space by compressing it. Try the command below to see how
-NoiseChisel's output has now shrunk to about 250 kilobyes while keeping all
-the necessary information as the original 100 mega-byte output.
+The output is now just under 8 megabytes! But you can be even more efficient 
in storage by compressing it.
+Try the command below to see how NoiseChisel's output has now shrunk to about 
250 kilobytes while keeping all the necessary information of the original 
100-megabyte output.
 
 @example
 $ gzip --best xdf-f160w_detected.fits
 $ ls -lh xdf-f160w_detected.fits.gz
 @end example
 
-We can get this wonderful level of compression because NoiseChisel's output
-is binary with only two values: 0 and 1. Compression algorithms are highly
-optimized in such scenarios.
+We can get this wonderful level of compression because NoiseChisel's output is 
binary with only two values: 0 and 1.
+Compression algorithms are highly optimized in such scenarios.
 
-You can open @file{xdf-f160w_detected.fits.gz} directly in SAO DS9 or feed
-it to any of Gnuastro's programs without having to uncompress
-it. Higher-level programs that take NoiseChisel's output can also deal with
-this compressed image where the Sky and its Standard deviation are one
-pixel-per-tile.
+You can open @file{xdf-f160w_detected.fits.gz} directly in SAO DS9 or feed it 
to any of Gnuastro's programs without having to uncompress it.
+Higher-level programs that take NoiseChisel's output can also deal with this 
compressed image where the Sky and its standard deviation are stored with one 
pixel per tile.
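
If you removed the Sky-subtracted image with @option{--rawoutput} but need it 
later, a sketch like the one below can regenerate it with the Arithmetic 
program (the output name here is only illustrative, and it assumes the 
default full-sized @code{SKY} extension, not the one-pixel-per-tile version):

@example
$ astarithmetic flat-ir/xdf-f160w.fits -h1   \
                nc/xdf-f160w.fits -hSKY -    \
                --output=sky-subtracted.fits
@end example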
 
 
 
 @node Segmentation and making a catalog, Working with catalogs estimating 
colors, NoiseChisel optimization for storage, General program usage tutorial
 @subsection Segmentation and making a catalog
-The main output of NoiseChisel is the binary detection map
-(@code{DETECTIONS} extension, see @ref{NoiseChisel optimization for
-detection}). which only has two values of 1 or 0. This is useful when
-studying the noise, but hardly of any use when you actually want to study
-the targets/galaxies in the image, especially in such a deep field where
-the detection map of almost everything is connected. To find the galaxies
-over the detections, we'll use Gnuastro's @ref{Segment} program:
+The main output of NoiseChisel is the binary detection map (@code{DETECTIONS} 
extension, see @ref{NoiseChisel optimization for detection}), which only has 
two values: 1 or 0.
+This is useful when studying the noise, but hardly of any use when you 
actually want to study the targets/galaxies in the image, especially in such a 
deep field where the detection map of almost everything is connected.
+To find the galaxies over the detections, we'll use Gnuastro's @ref{Segment} 
program:
 
 @example
 $ mkdir seg
 $ astsegment nc/xdf-f160w.fits -oseg/xdf-f160w.fits
 @end example
 
-Segment's operation is very much like NoiseChisel (in fact, prior to
-version 0.6, it was part of NoiseChisel). For example the output is a
-multi-extension FITS file, it has check images and uses the undetected
-regions as a reference. Please have a look at Segment's multi-extension
-output with @command{ds9} to get a good feeling of what it has done.
+Segment's operation is very much like NoiseChisel (in fact, prior to version 
0.6, it was part of NoiseChisel).
+For example, the output is a multi-extension FITS file; it has check images 
and uses the undetected regions as a reference.
+Please have a look at Segment's multi-extension output with @command{ds9} to 
get a good feeling of what it has done.
 
 @example
 $ ds9 -mecube seg/xdf-f160w.fits -zscale -zoom to fit
 @end example
 
-Like NoiseChisel, the first extension is the input. The @code{CLUMPS}
-extension shows the true ``clumps'' with values that are @mymath{\ge1}, and
-the diffuse regions labeled as @mymath{-1}. In the @code{OBJECTS}
-extension, we see that the large detections of NoiseChisel (that may have
-contained many galaxies) are now broken up into separate labels. see
-@ref{Segment} for more.
+Like NoiseChisel, the first extension is the input.
+The @code{CLUMPS} extension shows the true ``clumps'' with values that are 
@mymath{\ge1}, and the diffuse regions labeled as @mymath{-1}.
+In the @code{OBJECTS} extension, we see that the large detections of 
NoiseChisel (that may have contained many galaxies) are now broken up into 
separate labels.
+See @ref{Segment} for more.
 
-Having localized the regions of interest in the dataset, we are ready to do
-measurements on them with @ref{MakeCatalog}. Besides the IDs, we want to
-measure (in this order) the Right Ascension (with @option{--ra}),
-Declination (@option{--dec}), magnitude (@option{--magnitude}), and
-signal-to-noise ratio (@option{--sn}) of the objects and clumps. The
-following command will make these measurements on Segment's F160W output:
+Having localized the regions of interest in the dataset, we are ready to do 
measurements on them with @ref{MakeCatalog}.
+Besides the IDs, we want to measure (in this order) the Right Ascension (with 
@option{--ra}), Declination (@option{--dec}), magnitude (@option{--magnitude}), 
and signal-to-noise ratio (@option{--sn}) of the objects and clumps.
+The following command will make these measurements on Segment's F160W output:
 
 @c Keep the `--zeropoint' on a single line, because later, we'll add
 @c `--valuesfile' in that line also, and it would be more clear if both
@@ -3781,30 +2937,18 @@ $ astmkcatalog seg/xdf-f160w.fits --ids --ra --dec 
--magnitude --sn \
 @end example
 
 @noindent
-From the printed statements on the command-line, you see that MakeCatalog
-read all the extensions in Segment's output for the various measurements it
-needed.
+From the printed statements on the command-line, you see that MakeCatalog read 
all the extensions in Segment's output for the various measurements it needed.
 
-To calculate colors, we also need magnitude measurements on the F105W
-filter. However, the galaxy properties might differ between the filters
-(which is the whole purpose behind measuring colors). Also, the noise
-properties and depth of the datasets differ. Therefore, if we simply follow
-the same Segment and MakeCatalog calls above for the F105W filter, we are
-going to get a different number of objects and clumps. Matching the two
-catalogs is possible (for example with @ref{Match}), but the fact that the
-measurements will be done on different pixels, can bias the result. Since
-the Point spread function (PSF) of both images is very similar, an accurate
-color calculation can only be done when magnitudes are measured from the
-same pixels on both images.
+To calculate colors, we also need magnitude measurements on the F105W filter.
+However, the galaxy properties might differ between the filters (which is the 
whole purpose behind measuring colors).
+Also, the noise properties and depth of the datasets differ.
+Therefore, if we simply follow the same Segment and MakeCatalog calls above 
for the F105W filter, we are going to get a different number of objects and 
clumps.
+Matching the two catalogs is possible (for example with @ref{Match}), but the 
fact that the measurements will be done on different pixels can bias the 
result.
+Since the point spread function (PSF) of both images is very similar, an 
accurate color calculation can only be done when magnitudes are measured from 
the same pixels on both images.
 
-The F160W image is deeper, thus providing better detection/segmentation,
-and redder, thus observing smaller/older stars and representing more of the
-mass in the galaxies. To generate the F105W catalog, we will thus use the
-pixel labels generated on the F160W filter, but do the measurements on the
-F105W filter (using MakeCatalog's @option{--valuesfile} option). Notice how
-the only difference between this call to MakeCatalog and the previous one
-is @option{--valuesfile}, the value given to @code{--zeropoint} and the
-output name.
+The F160W image is deeper, thus providing better detection/segmentation, and 
redder, thus observing smaller/older stars and representing more of the mass in 
the galaxies.
+To generate the F105W catalog, we will thus use the pixel labels generated on 
the F160W filter, but do the measurements on the F105W filter (using 
MakeCatalog's @option{--valuesfile} option).
+Notice how the only difference between this call to MakeCatalog and the 
previous one is @option{--valuesfile}, the value given to @code{--zeropoint} 
and the output name.
 
 @example
 $ astmkcatalog seg/xdf-f160w.fits --ids --ra --dec --magnitude --sn \
@@ -3812,47 +2956,35 @@ $ astmkcatalog seg/xdf-f160w.fits --ids --ra --dec 
--magnitude --sn \
                --clumpscat --output=cat/xdf-f105w.fits
 @end example
 
-Look into what MakeCatalog printed on the command-line. You can see that
-(as requested) the object and clump labels were taken from the respective
-extensions in @file{seg/xdf-f160w.fits}, while the values and Sky standard
-deviation were done on @file{nc/xdf-f105w.fits}.
+Look into what MakeCatalog printed on the command-line.
+You can see that (as requested) the object and clump labels were taken from 
the respective extensions in @file{seg/xdf-f160w.fits}, while the values and 
Sky standard deviation were taken from @file{nc/xdf-f105w.fits}.
 
-Since we used the same labeled image on both filters, the number of rows in
-both catalogs are the same. The clumps are not affected by the
-hard-to-deblend and low signal-to-noise diffuse regions, they are more
-robust for calculating the colors (compared to objects). Therefore from
-this step onward, we'll continue with clumps.
+Since we used the same labeled image on both filters, the number of rows in 
both catalogs is the same.
+The clumps are not affected by the hard-to-deblend, low signal-to-noise 
diffuse regions, so they are more robust than the objects for calculating 
colors.
+Therefore, from this step onward, we'll continue with clumps.
 
-Finally, the comments in MakeCatalog's output (@code{COMMENT} keywords in
-the FITS headers, or lines starting with @code{#} in plain text) contain
-some important information about the input datasets and other useful info
-(for example pixel area or per-pixel surface brightness limit). You can see
-them with this command:
+Finally, the comments in MakeCatalog's output (@code{COMMENT} keywords in the 
FITS headers, or lines starting with @code{#} in plain text) contain some 
important information about the input datasets and other useful info (for 
example pixel area or per-pixel surface brightness limit).
+You can see them with this command:
 
 @example
 $ astfits cat/xdf-f160w.fits -h1 | grep COMMENT
 @end example
 
 
-@node Working with catalogs estimating colors, Aperture photomery, 
Segmentation and making a catalog, General program usage tutorial
+@node Working with catalogs estimating colors, Aperture photometry, 
Segmentation and making a catalog, General program usage tutorial
 @subsection Working with catalogs (estimating colors)
-The output of the MakeCatalog command above is a FITS table (see
-@ref{Segmentation and making a catalog}). The two clump and object catalogs
-are available in the two extensions of the single FITS
-file@footnote{MakeCatalog can also output plain text tables. However, in
-the plain text format you can only have one table per file. Therefore, if
-you also request measurements on clumps, two plain text tables will be
-created (suffixed with @file{_o.txt} and @file{_c.txt}).}. Let's see the
-extensions and their basic properties with the Fits program:
+The output of the MakeCatalog command above is a FITS table (see 
@ref{Segmentation and making a catalog}).
+The two clump and object catalogs are available in the two extensions of the 
single FITS file@footnote{MakeCatalog can also output plain text tables.
+However, in the plain text format you can only have one table per file.
+Therefore, if you also request measurements on clumps, two plain text tables 
will be created (suffixed with @file{_o.txt} and @file{_c.txt}).}.
+Let's see the extensions and their basic properties with the Fits program:
 
 @example
 $ astfits  cat/xdf-f160w.fits              # Extension information
 @end example
 
-Now, let's inspect the table in each extension with Gnuastro's Table
-program (see @ref{Table}). Note that we could have used @option{-hOBJECTS}
-and @option{-hCLUMPS} instead of @option{-h1} and @option{-h2}
-respectively.
+Now, let's inspect the table in each extension with Gnuastro's Table program 
(see @ref{Table}).
+Note that we could have used @option{-hOBJECTS} and @option{-hCLUMPS} instead 
of @option{-h1} and @option{-h2} respectively.
 
 @example
 $ asttable cat/xdf-f160w.fits -h1 --info   # Objects catalog info.
@@ -3861,16 +2993,11 @@ $ asttable cat/xdf-f160w.fits -h2 -i       # Clumps 
catalog info.
 $ asttable cat/xdf-f160w.fits -h2          # Clumps catalog columns.
 @end example
 
-As you see above, when given a specific table (file name and extension),
-Table will print the full contents of all the columns. To see the basic
-metadata about each column (for example name, units and comments), simply
-append a @option{--info} (or @option{-i}) to the command.
+As you see above, when given a specific table (file name and extension), Table 
will print the full contents of all the columns.
+To see the basic metadata about each column (for example name, units and 
comments), simply append a @option{--info} (or @option{-i}) to the command.
 
-To print the contents of special column(s), just specify the column
-number(s) (counting from @code{1}) or the column name(s) (if they have
-one). For example, if you just want the magnitude and signal-to-noise ratio
-of the clumps (in @option{-h2}), you can get it with any of the following
-commands
+To print the contents of special column(s), just specify the column number(s) 
(counting from @code{1}) or the column name(s) (if they have one).
+For example, if you just want the magnitude and signal-to-noise ratio of the 
clumps (in @option{-h2}), you can get them with any of the following commands:
 
 @example
 $ asttable cat/xdf-f160w.fits -h2 -c5,6
@@ -3879,41 +3006,25 @@ $ asttable cat/xdf-f160w.fits -h2 -c5         -c6
 $ asttable cat/xdf-f160w.fits -h2 -cMAGNITUDE -cSN
 @end example
 
-Using column names instead of numbers has many advantages: 1) you don't
-have to worry about the order of columns in the table. 2) It acts as a
-documentation in the script. Column meta-data (including a name) aren't
-just limited to FITS tables and can also be used in plain text tables, see
-@ref{Gnuastro text table format}.
-
-We can finally calculate the colors of the objects from these two
-datasets. If you inspect the contents of the two catalogs, you'll notice
-that because they were both derived from the same segmentation maps, the
-rows are ordered identically (they correspond to the same object/clump in
-both filters). But to be generic (usable even when the rows aren't ordered
-similarly) and display another useful program in Gnuastro, we'll use
-@ref{Match}.
-
-As the name suggests, Gnuastro's Match program will match rows based on
-distance (or aperture in 2D) in one (or two) columns. In the command below,
-the options relating to each catalog are placed under it for easy
-understanding. You give Match two catalogs (from the two different filters
-we derived above) as argument, and the HDUs containing them (if they are
-FITS files) with the @option{--hdu} and @option{--hdu2} options. The
-@option{--ccol1} and @option{--ccol2} options specify the
-coordinate-columns which should be matched with which in the two
-catalogs. With @option{--aperture} you specify the acceptable error (radius
-in 2D), in the same units as the columns (see below for why we have
-requested an aperture of 0.35 arcseconds, or less than 6 HST pixels).
-
-The @option{--outcols} of Match is a very convenient feature in Match: you
-can use it to specify which columns from the two catalogs you want in the
-output (merge two input catalogs into one). If the first character is an
-`@key{a}', the respective matched column (number or name, similar to Table
-above) in the first catalog will be written in the output table. When the
-first character is a `@key{b}', the respective column from the second
-catalog will be written in the output. Also, if the first character is
-followed by @code{_all}, then all the columns from the respective catalog
-will be put in the output.
+Using column names instead of numbers has many advantages:
+1) you don't have to worry about the order of columns in the table.
+2) It acts as documentation in the script.
+Column meta-data (including a name) aren't just limited to FITS tables and can 
also be used in plain text tables, see @ref{Gnuastro text table format}.
+
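As a purely hypothetical illustration (these column names and values are 
invented here, not part of this tutorial), the metadata of a plain text table 
is given in comment lines just above the data:

@example
# Column 1: RA    [deg, f64] Right Ascension
# Column 2: MAG_R [log, f32] Magnitude in the r band
53.16099  25.39
@end example
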
+We can finally calculate the colors of the objects from these two datasets.
+If you inspect the contents of the two catalogs, you'll notice that because 
they were both derived from the same segmentation maps, the rows are ordered 
identically (they correspond to the same object/clump in both filters).
+But to be generic (usable even when the rows aren't ordered similarly) and to 
demonstrate another useful program in Gnuastro, we'll use @ref{Match}.
+
+As the name suggests, Gnuastro's Match program will match rows based on 
distance (or aperture in 2D) in one (or two) columns.
+In the command below, the options relating to each catalog are placed under it 
for easy understanding.
+You give Match two catalogs (from the two different filters we derived above) 
as argument, and the HDUs containing them (if they are FITS files) with the 
@option{--hdu} and @option{--hdu2} options.
+The @option{--ccol1} and @option{--ccol2} options specify the 
coordinate-columns which should be matched with which in the two catalogs.
+With @option{--aperture} you specify the acceptable error (radius in 2D), in 
the same units as the columns (see below for why we have requested an aperture 
of 0.35 arcseconds, or less than 6 HST pixels).
+
+Match's @option{--outcols} is a very convenient feature: you can use it to 
specify which columns from the two catalogs you want in the output (merging 
the two input catalogs into one).
+If the first character is an `@key{a}', the respective matched column (number 
or name, similar to Table above) in the first catalog will be written in the 
output table.
+When the first character is a `@key{b}', the respective column from the second 
catalog will be written in the output.
+Also, if the first character is followed by @code{_all}, then all the columns 
from the respective catalog will be put in the output.
 
 @example
 $ astmatch cat/xdf-f160w.fits           cat/xdf-f105w.fits         \
@@ -3930,15 +3041,12 @@ Let's have a look at the columns in the matched catalog:
 $ asttable cat/xdf-f160w-f105w.fits -i
 @end example
 
-Indeed, its exactly the columns we wanted: there are two @code{MAGNITUDE}
-and @code{SN} columns. The first is from the F160W filter, the second is
-from the F105W. Right now, you know this. But in one hour, you'll start
-doubting your self: going through your command history, trying to answer
-this question: ``which magnitude corresponds to which filter?''. You should
-never torture your future-self (or colleagues) like this! So, let's rename
-these confusing columns in the matched catalog. The FITS standard for
-tables stores the column names in the @code{TTYPE} header keywords, so
-let's have a look:
+Indeed, it's exactly the columns we wanted: there are two @code{MAGNITUDE} 
and @code{SN} columns.
+The first is from the F160W filter, the second is from the F105W.
+Right now, you know this.
+But in one hour, you'll start doubting yourself: going through your command 
history, trying to answer this question: ``which magnitude corresponds to 
which filter?''.
+You should never torture your future self (or colleagues) like this! So, 
let's rename these confusing columns in the matched catalog.
+The FITS standard for tables stores the column names in the @code{TTYPE} 
header keywords, so let's have a look:
 
 @example
 $ astfits cat/xdf-f160w-f105w.fits -h1 | grep TTYPE
@@ -3955,86 +3063,48 @@ $ astfits cat/xdf-f160w-f105w.fits -h1                  
        \
 $ asttable cat/xdf-f160w-f105w.fits -i
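# Reminder (the keyword number and new name here are only illustrative):
# renaming a column is just updating its TTYPE keyword, for example:
#   $ astfits cat/xdf-f160w-f105w.fits -h1 --update=TTYPE5,MAGNITUDE_F160W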
 @end example
 
-If you noticed, when running Match, we also asked for a log file
-(@option{--log}). Many Gnuastro programs have this option to provide some
-detailed information on their operation in case you are curious or want to
-debug something. Here, we are using it to justify the value we gave to
-@option{--aperture}. Even though you asked for the output to be written in
-the @file{cat} directory, a listing of the contents of your current
-directory will show you an extra @file{astmatch.fits} file. Let's have a
-look at what columns it contains.
+As you may have noticed, when running Match we also asked for a log file 
(@option{--log}).
+Many Gnuastro programs have this option to provide some detailed information 
on their operation in case you are curious or want to debug something.
+Here, we are using it to justify the value we gave to @option{--aperture}.
+Even though you asked for the output to be written in the @file{cat} 
directory, a listing of the contents of your current directory will show you an 
extra @file{astmatch.fits} file.
+Let's have a look at what columns it contains.
 
 @example
 $ ls
-$ asttable astmatch.log -i
+$ asttable astmatch.fits -i
 @end example
 
-@c********************************
-@c We'll merge them into one table using the @command{paste} program
-@c on the command-line. But, we only want the magnitude from the F105W
-@c dataset, so we'll only pull out the @code{MAGNITUDE} and @code{SN}
-@c column. The output of @command{paste} will have each line of both catalogs
-@c merged into a single line.
-
-@c @example
-@c $ asttable cat/xdf-f160w.fits -h2                > xdf-f160w.txt
-@c $ asttable cat/xdf-f105w.fits -h2 -cMAGNITUDE,SN > xdf-f105w.txt
-@c $ paste xdf-f160w.txt xdf-f105w.txt              > xdf-f160w-f105w.txt
-@c @end example
-
-@c Open @file{xdf-f160w-f105w.txt} to see how @command{paste} has operated.
-@c ********************************
-
 @cindex Flux-weighted
 @cindex SED, Spectral Energy Distribution
 @cindex Spectral Energy Distribution, SED
-The @file{MATCH_DIST} column contains the distance of the matched rows,
-let's have a look at the distribution of values in this column. You might
-be asking yourself ``why should the positions of the two filters differ
-when I gave MakeCatalog the same segmentation map?'' The reason is that the
-central positions are @emph{flux-weighted}. Therefore the
-@option{--valuesfile} dataset you give to MakeCatalog will also affect the
-center measurements@footnote{To only measure the center based on the
-labeled pixels (and ignore the pixel values), you can ask for the columns
-that contain @option{geo} (for geometric) in them. For example
-@option{--geow1} or @option{--geow2} for the RA and Declination (first and
-second world-coordinates).}. Recall that the Spectral Energy Distribution
-(SED) of galaxies is not flat and they have substructure, therefore, they
-can have different shapes/morphologies in different filters.
-
-Gnuastro has a simple program for basic statistical analysis. The command
-below will print some basic information about the distribution (minimum,
-maximum, median and etc), along with a cute little ASCII histogram to
-visually help you understand the distribution on the command-line without
-the need for a graphic user interface. This ASCII histogram can be useful
-when you just want some coarse and general information on the input
-dataset. It is also useful when working on a server (where you may not have
-graphic user interface), and finally, its fast.
+The @file{MATCH_DIST} column contains the distance of the matched rows; let's 
have a look at the distribution of values in this column.
+You might be asking yourself ``why should the positions of the two filters 
differ when I gave MakeCatalog the same segmentation map?'' The reason is that 
the central positions are @emph{flux-weighted}.
+Therefore the @option{--valuesfile} dataset you give to MakeCatalog will also 
affect the center measurements@footnote{To only measure the center based on the 
labeled pixels (and ignore the pixel values), you can ask for the columns that 
contain @option{geo} (for geometric) in them.
+For example @option{--geow1} or @option{--geow2} for the RA and Declination 
(first and second world-coordinates).}.
+Recall that the Spectral Energy Distribution (SED) of galaxies is not flat 
and that galaxies have substructure; therefore, they can have different 
shapes/morphologies in different filters.
+
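For example, a sketch of asking MakeCatalog for those geometric 
(value-independent) center columns, using the option names from the footnote 
above (the output name here is invented), could be:

@example
$ astmkcatalog seg/xdf-f160w.fits --ids --geow1 --geow2 \
               --clumpscat --output=geo-centers.fits
@end example
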
+Gnuastro has a simple program for basic statistical analysis.
+The command below will print some basic information about the distribution 
(minimum, maximum, median, etc), along with a cute little ASCII histogram to 
visually help you understand the distribution on the command-line without the 
need for a graphic user interface.
+This ASCII histogram can be useful when you just want some coarse and general 
information on the input dataset.
+It is also useful when working on a server (where you may not have a graphic 
user interface), and finally, it's fast.
 
 @example
 $ aststatistics astmatch.fits -cMATCH_DIST
 $ rm astmatch.fits
 @end example
 
-The units of this column are the same as the columns you gave to Match: in
-degrees. You see that while almost all the objects matched very nicely, the
-maximum distance is roughly 0.31 arcseconds. This is why we asked for an
-aperture of 0.35 arcseconds when doing the match.
+The units of this column are the same as the columns you gave to Match: 
degrees.
+You see that while almost all the objects matched very nicely, the maximum 
distance is roughly 0.31 arcseconds.
+This is why we asked for an aperture of 0.35 arcseconds when doing the match.
 
-Gnuastro's Table program can also be used to measure the colors using the
-command below. As before, the @option{-c1,2} option will tell Table to
-print the first two columns. With the @option{--range=SN_F160W,7,inf} we
-only keep the rows that have a F160W signal-to-noise ratio larger than
-7@footnote{The value of 7 is taken from the clump S/N threshold in F160W
-(where the clumps were defined).}.
+Gnuastro's Table program can also be used to measure the colors using the 
command below.
+As before, the @option{-c1,2} option will tell Table to print the first two 
columns.
+With the @option{--range=SN_F160W,7,inf} we only keep the rows that have an 
F160W signal-to-noise ratio larger than 7@footnote{The value of 7 is taken 
from the clump S/N threshold in F160W (where the clumps were defined).}.
 
-Finally, for estimating the colors, we use Table's column arithmetic
-feature. It uses the same notation as the Arithmetic program (see
-@ref{Reverse polish notation}), with almost all the same operators (see
-@ref{Arithmetic operators}). You can use column arithmetic in any output
-column, just put the value in double quotations and start the value with
-@code{arith} (followed by a space) like below. In column-arithmetic, you
-can identify columns by number or name, see @ref{Column arithmetic}.
+Finally, for estimating the colors, we use Table's column arithmetic feature.
+It uses the same notation as the Arithmetic program (see @ref{Reverse polish 
notation}), with almost all the same operators (see @ref{Arithmetic operators}).
+You can use column arithmetic in any output column: just put the value in 
double quotation marks and start it with @code{arith} (followed by a space) 
like below.
+In column-arithmetic, you can identify columns by number or name, see 
@ref{Column arithmetic}.
 
 @example
 $ asttable cat/xdf-f160w-f105w.fits -ocat/f105w-f160w.fits \
@@ -4043,20 +3113,16 @@ $ asttable cat/xdf-f160w-f105w.fits 
-ocat/f105w-f160w.fits \
 @end example
 
 @noindent
-You can inspect the distribution of colors with the Statistics program. But
-first, let's give the color column a proper name.
+You can inspect the distribution of colors with the Statistics program.
+But first, let's give the color column a proper name.
 
 @example
 $ astfits cat/f105w-f160w.fits --update=TTYPE5,COLOR_F105W_F160W
 $ aststatistics cat/f105w-f160w.fits -cCOLOR_F105W_F160W
 @end example
 
-You can later use Gnuastro's Statistics program with the
-@option{--histogram} option to build a much more fine-grained histogram as
-a table to feed into your favorite plotting program for a much more
-accurate/appealing plot (for example with PGFPlots in @LaTeX{}). If you
-just want a specific measure, for example the mean, median and standard
-deviation, you can ask for them specifically with this command:
+You can later use Gnuastro's Statistics program with the @option{--histogram} 
option to build a finer-grained histogram as a table, to feed into your 
favorite plotting program for a more accurate/appealing plot (for example with 
PGFPlots in @LaTeX{}).
+If you just want a specific measure, for example the mean, median and standard 
deviation, you can ask for them specifically with this command:
 
 @example
 $ aststatistics cat/f105w-f160w.fits -cCOLOR_F105W_F160W \
@@ -4064,22 +3130,17 @@ $ aststatistics cat/f105w-f160w.fits 
-cCOLOR_F105W_F160W \
 @end example
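
For instance, a sketch of writing that finer histogram into a table (the 
output file name is only illustrative) might be:

@example
$ aststatistics cat/f105w-f160w.fits -cCOLOR_F105W_F160W \
                --histogram --output=color-histogram.txt
@end example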
 
 
-@node Aperture photomery, Finding reddest clumps and visual inspection, 
Working with catalogs estimating colors, General program usage tutorial
-@subsection Aperture photomery
-Some researchers prefer to have colors in a fixed aperture for all the
-objects. The colors we calculated in @ref{Working with catalogs estimating
-colors} used a different segmentation map for each object. This might not
-satisfy some science cases. To make a catalog from fixed apertures, we
-should make a labeled image which has a fixed label for each aperture. That
-labeled image can be given to MakeCatalog instead of Segment's labeled
-detection image.
+@node Aperture photometry, Finding reddest clumps and visual inspection, 
Working with catalogs estimating colors, General program usage tutorial
+@subsection Aperture photometry
+Some researchers prefer to have colors in a fixed aperture for all the objects.
+The colors we calculated in @ref{Working with catalogs estimating colors} used 
a different segmentation map for each object.
+This might not satisfy some science cases.
+To make a catalog from fixed apertures, we should make a labeled image which 
has a fixed label for each aperture.
+That labeled image can be given to MakeCatalog instead of Segment's labeled 
detection image.
 
 @cindex GNU AWK
-To generate the apertures catalog we'll use Gnuastro's MakeProfiles (see
-@ref{MakeProfiles}). We'll first read the clump positions from the F160W
-catalog, then use AWK to set the other parameters of each profile to be a
-fixed circle of radius 5 pixels (recall that we want all apertures to be
-identical in this scenario).
+To generate the apertures catalog we'll use Gnuastro's MakeProfiles (see 
@ref{MakeProfiles}).
+We'll first read the clump positions from the F160W catalog, then use AWK to 
set the other parameters of each profile to be a fixed circle of radius 5 
pixels (recall that we want all apertures to be identical in this scenario).
 
 @example
 $ rm *.fits *.txt
@@ -4088,15 +3149,10 @@ $ asttable cat/xdf-f160w.fits -hCLUMPS -cRA,DEC         
           \
            > apertures.txt
 @end example
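
In case the piped AWK step above is hard to follow: each output row must give 
MakeProfiles its standard catalog columns (ID, position, profile function, 
radius and so on). A hedged sketch of such a pipe, hard-coding a flat profile 
and the fixed 5-pixel radius described above, could be:

@example
$ asttable cat/xdf-f160w.fits -hCLUMPS -cRA,DEC                    \
           | awk '!/^#/@{print NR, $1, $2, 5, 5, 0, 0, 1, NR, 1@}' \
           > apertures.txt
@end example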
 
-We can now feed this catalog into MakeProfiles using the command below to
-build the apertures over the image. The most important option for this
-particular job is @option{--mforflatpix}, it tells MakeProfiles that the
-values in the magnitude column should be used for each pixel of a flat
-profile. Without it, MakeProfiles would build the profiles such that the
-@emph{sum} of the pixels of each profile would have a @emph{magnitude} (in
-log-scale) of the value given in that column (what you would expect when
-simulating a galaxy for example). See @ref{Invoking astmkprof} for details
-on the options.
+We can now feed this catalog into MakeProfiles using the command below to 
build the apertures over the image.
+The most important option for this particular job is @option{--mforflatpix}, 
it tells MakeProfiles that the values in the magnitude column should be used 
for each pixel of a flat profile.
+Without it, MakeProfiles would build the profiles such that the @emph{sum} of 
the pixels of each profile would have a @emph{magnitude} (in log-scale) of the 
value given in that column (what you would expect when simulating a galaxy for 
example).
+See @ref{Invoking astmkprof} for details on the options.
 
 @example
 $ astmkprof apertures.txt --background=flat-ir/xdf-f160w.fits     \
@@ -4104,25 +3160,17 @@ $ astmkprof apertures.txt 
--background=flat-ir/xdf-f160w.fits     \
             --mode=wcs
 @end example
 
-The first thing you might notice in the printed information is that the
-profiles are not built in order. This is because MakeProfiles works in
-parallel, and parallel CPU operations are asynchronous. You can try running
-MakeProfiles with one thread (using @option{--numthreads=1}) to see how
-order is respected in that case.
+The first thing you might notice in the printed information is that the 
profiles are not built in order.
+This is because MakeProfiles works in parallel, and parallel CPU operations 
are asynchronous.
+You can try running MakeProfiles with one thread (using 
@option{--numthreads=1}) to see how order is respected in that case.
 
-Open the output @file{apertures.fits} file and see the result. Where the
-apertures overlap, you will notice that one label has replaced the other
-(because of the @option{--replace} option). In the future, MakeCatalog will
-be able to work with overlapping labels, but currently it doesn't. If you
-are interested, please join us in completing Gnuastro with added
-improvements like this (see task 14750
-@footnote{@url{https://savannah.gnu.org/task/index.php?14750}}).
+Open the output @file{apertures.fits} file and see the result.
+Where the apertures overlap, you will notice that one label has replaced the 
other (because of the @option{--replace} option).
+In the future, MakeCatalog will be able to work with overlapping labels, but 
currently it doesn't.
+If you are interested, please join us in completing Gnuastro with added 
improvements like this (see task 14750 
@footnote{@url{https://savannah.gnu.org/task/index.php?14750}}).
 
-We can now feed the @file{apertures.fits} labeled image into MakeCatalog
-instead of Segment's output as shown below. In comparison with the previous
-MakeCatalog call, you will notice that there is no more
-@option{--clumpscat} option, since each aperture is treated as a separate
-``object'' here.
+We can now feed the @file{apertures.fits} labeled image into MakeCatalog 
instead of Segment's output as shown below.
+In comparison with the previous MakeCatalog call, you will notice that there 
is no @option{--clumpscat} option any more, since each aperture is treated as 
a separate ``object'' here.
 
 @example
 $ astmkcatalog apertures.fits -h1 --zeropoint=26.27        \
@@ -4131,36 +3179,28 @@ $ astmkcatalog apertures.fits -h1 --zeropoint=26.27     
   \
                --output=cat/xdf-f105w-aper.fits
 @end example
 
-This catalog has the same number of rows as the catalog produced from
-clumps in @ref{Working with catalogs estimating colors}. Therefore similar
-to how we found colors, you can compare the aperture and clump magnitudes
-for example.
+This catalog has the same number of rows as the catalog produced from clumps 
in @ref{Working with catalogs estimating colors}.
+Therefore, similar to how we found colors, you can for example compare the 
aperture and clump magnitudes.
 
 You can also change the filter name and zeropoint magnitudes and run this
 command again to have the fixed aperture magnitude in the F160W filter and
 measure colors on apertures.
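
A sketch of that repeated call for F160W might look like the following 
(assuming 25.94, the F160W zeropoint used earlier in this tutorial; the middle 
options mirror the F105W call):

@example
$ astmkcatalog apertures.fits -h1 --zeropoint=25.94        \
               --valuesfile=nc/xdf-f160w.fits              \
               --ids --ra --dec --magnitude --sn           \
               --output=cat/xdf-f160w-aper.fits
@end example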
 
 
-@node Finding reddest clumps and visual inspection, Citing and acknowledging 
Gnuastro, Aperture photomery, General program usage tutorial
+@node Finding reddest clumps and visual inspection, Citing and acknowledging 
Gnuastro, Aperture photometry, General program usage tutorial
 @subsection Finding reddest clumps and visual inspection
 @cindex GNU AWK
-As a final step, let's go back to the original clumps-based color
-measurement we generated in @ref{Working with catalogs estimating
-colors}. We'll find the objects with the strongest color and make a cutout
-to inspect them visually and finally, we'll see how they are located on the
-image. With the command below, we'll select the reddest objects (those with
-a color larger than 1.5):
+As a final step, let's go back to the original clumps-based color measurement 
we generated in @ref{Working with catalogs estimating colors}.
+We'll find the objects with the strongest color and make a cutout to inspect 
them visually; finally, we'll see where they are located on the image.
+With the command below, we'll select the reddest objects (those with a color 
larger than 1.5):
 
 @example
 $ asttable cat/f105w-f160w.fits --range=COLOR_F105W_F160W,1.5,inf
 @end example
 
-We want to crop the F160W image around each of these objects, but we need a
-unique identifier for them first. We'll define this identifier using the
-object and clump labels (with an underscore between them) and feed the
-output of the command above to AWK to generate a catalog. Note that since
-we are making a plain text table, we'll define the column metadata manually
-(see @ref{Gnuastro text table format}).
+We want to crop the F160W image around each of these objects, but we need a 
unique identifier for them first.
+We'll define this identifier using the object and clump labels (with an 
underscore between them) and feed the output of the command above to AWK to 
generate a catalog.
+Note that since we are making a plain text table, we'll define the column 
metadata manually (see @ref{Gnuastro text table format}).
 
 @example
 $ echo "# Column 1: ID [name, str10] Object ID" > reddest.txt
@@ -4169,12 +3209,10 @@ $ asttable cat/f105w-f160w.fits 
--range=COLOR_F105W_F160W,1.5,inf \
            >> reddest.txt
 @end example
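
In case the piped AWK step above is hard to read, it only has to join the two 
labels with an underscore and print the position. A hedged sketch (assuming 
the first four columns are the object ID, clump ID, RA and Dec) could be:

@example
$ asttable cat/f105w-f160w.fits --range=COLOR_F105W_F160W,1.5,inf \
           | awk '@{printf("%d_%d %f %f\n", $1, $2, $3, $4)@}'    \
           >> reddest.txt
@end example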
 
-We can now feed @file{reddest.txt} into Gnuastro's Crop program to see what
-these objects look like. To keep things clean, we'll make a directory
-called @file{crop-red} and ask Crop to save the crops in this
-directory. We'll also add a @file{-f160w.fits} suffix to the crops (to
-remind us which image they came from). The width of the crops will be 15
-arcseconds.
+We can now feed @file{reddest.txt} into Gnuastro's Crop program to see what 
these objects look like.
+To keep things clean, we'll make a directory called @file{crop-red} and ask 
Crop to save the crops in this directory.
+We'll also add a @file{-f160w.fits} suffix to the crops (to remind us which 
image they came from).
+The width of the crops will be 15 arcseconds.
 
 @example
 $ mkdir crop-red
@@ -4183,16 +3221,12 @@ $ astcrop flat-ir/xdf-f160w.fits --mode=wcs 
--namecol=ID \
           --suffix=-f160w.fits --output=crop-red
 @end example
 
-You can see all the cropped FITS files in the @file{crop-red}
-directory. Like the MakeProfiles command in @ref{Aperture photomery}, you
-might notice that the crops aren't made in order. This is because each crop
-is independent of the rest, therefore crops are done in parallel, and
-parallel operations are asynchronous. In the command above, you can change
-@file{f160w} to @file{f105w} to make the crops in both filters.
+You can see all the cropped FITS files in the @file{crop-red} directory.
+Like the MakeProfiles command in @ref{Aperture photometry}, you might notice 
that the crops aren't made in order.
+This is because each crop is independent of the rest, so the crops are done 
in parallel, and parallel operations are asynchronous.
+In the command above, you can change @file{f160w} to @file{f105w} to make the 
crops in both filters.
 
-To view the crops more easily (not having to open ds9 for each image), you
-can convert the FITS crops into the JPEG format with a shell loop like
-below.
+To view the crops more easily (not having to open ds9 for each image), you can 
convert the FITS crops into the JPEG format with a shell loop like below.
 
 @example
 $ cd crop-red
@@ -4202,18 +3236,14 @@ $ for f in *.fits; do                                   
               \
 $ cd ..
 @end example
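
One hedged form of that loop (ConvertType with its default flux scaling, and 
a shell suffix substitution to name the outputs) might be:

@example
$ for f in *.fits; do astconvertt $f --output=$@{f%.fits@}.jpg; done
@end example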
 
-You can now use your general graphic user interface image viewer to flip
-through the images more easily, or import them into your papers/reports.
+You can now use your general graphic user interface image viewer to flip 
through the images more easily, or import them into your papers/reports.
 
 @cindex GNU Parallel
-The @code{for} loop above to convert the images will do the job in series:
-each file is converted only after the previous one is complete. If you have
-@url{https://www.gnu.org/s/parallel, GNU Parallel}, you can greatly speed
-up this conversion. GNU Parallel will run the separate commands
-simultaneously on different CPU threads in parallel. For more information
-on efficiently using your threads, see @ref{Multi-threaded
-operations}. Here is a replacement for the shell @code{for} loop above
-using GNU Parallel.
+The @code{for} loop above to convert the images will do the job in series: 
each file is converted only after the previous one is complete.
+If you have @url{https://www.gnu.org/s/parallel, GNU Parallel}, you can 
greatly speed up this conversion.
+GNU Parallel will run the separate commands simultaneously on different CPU 
threads in parallel.
+For more information on efficiently using your threads, see 
@ref{Multi-threaded operations}.
+Here is a replacement for the shell @code{for} loop above using GNU Parallel.
 
 @example
 $ cd crop-red
@@ -4224,10 +3254,10 @@ $ cd ..
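# One hedged form of that call (Parallel's @{.@} replacement string
# strips the suffix to name the JPEG outputs):
$ parallel astconvertt @{@} --output=@{.@}.jpg ::: *.fits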
 
 @cindex DS9
 @cindex SAO DS9
-As the final action, let's see how these objects are positioned over the
-dataset. DS9 has the ``Region''s concept for this purpose. You just have to
-convert your catalog into a ``region file'' to feed into DS9. To do that,
-you can use AWK again as shown below.
+As the final action, let's see how these objects are positioned over the 
dataset.
+DS9 has the ``Regions'' concept for this purpose.
+You just have to convert your catalog into a ``region file'' to feed into DS9.
+To do that, you can use AWK again as shown below.
 
 @example
 $ awk 'BEGIN@{print "# Region file format: DS9 version 4.1";      \
@@ -4237,10 +3267,8 @@ $ awk 'BEGIN@{print "# Region file format: DS9 version 
4.1";      \
       reddest.txt > reddest.reg
 @end example
 
-This region file can be loaded into DS9 with its @option{-regions} option
-to display over any image (that has world coordinate system). In the
-example below, we'll open Segment's output and load the regions over all
-the extensions (to see the image and the respective clump):
+This region file can be loaded into DS9 with its @option{-regions} option to 
display over any image (that has a world coordinate system).
+In the example below, we'll open Segment's output and load the regions over 
all the extensions (to see the image and the respective clump):
 
 @example
 $ ds9 -mecube seg/xdf-f160w.fits -zscale -zoom to fit    \
@@ -4250,14 +3278,10 @@ $ ds9 -mecube seg/xdf-f160w.fits -zscale -zoom to fit   
 \
 
 @node Citing and acknowledging Gnuastro,  , Finding reddest clumps and visual 
inspection, General program usage tutorial
 @subsection Citing and acknowledging Gnuastro
-In conclusion, we hope this extended tutorial has been a good starting
-point to help in your exciting research. If this book or any of the
-programs in Gnuastro have been useful for your research, please cite the
-respective papers, and acknowledge the funding agencies that made all of
-this possible. All Gnuastro programs have a @option{--cite} option to
-facilitate the citation and acknowledgment. Just note that it may be
-necessary to cite additional papers for different programs, so please try
-it out on all the programs that you used, for example:
+In conclusion, we hope this extended tutorial has been a good starting point 
to help in your exciting research.
+If this book or any of the programs in Gnuastro have been useful for your 
research, please cite the respective papers, and acknowledge the funding 
agencies that made all of this possible.
+All Gnuastro programs have a @option{--cite} option to facilitate the citation 
and acknowledgment.
+Just note that it may be necessary to cite additional papers for different 
programs, so please try it out on all the programs that you used, for example:
 
 @example
 $ astmkcatalog --cite
@@ -4274,67 +3298,44 @@ $ astnoisechisel --cite
 @node Detecting large extended targets,  , General program usage tutorial, 
Tutorials
 @section Detecting large extended targets
 
-The outer wings of large and extended objects can sink into the noise very
-gradually and can have a large variety of shapes (for example due to tidal
-interactions). Therefore separating the outer boundaries of the galaxies
-from the noise can be particularly tricky. Besides causing an
-under-estimation in the total estimated brightness of the target, failure
-to detect such faint wings will also cause a bias in the noise
-measurements, thereby hampering the accuracy of any measurement on the
-dataset. Therefore even if they don't constitute a significant fraction of
-the target's light, or aren't your primary target, these regions must not
-be ignored. In this tutorial, we'll walk you through the strategy of
-detecting such targets using @ref{NoiseChisel}.
+The outer wings of large and extended objects can sink into the noise very 
gradually and can have a large variety of shapes (for example due to tidal 
interactions).
+Therefore separating the outer boundaries of the galaxies from the noise can 
be particularly tricky.
+Besides causing an under-estimation of the target's total brightness, failure 
to detect such faint wings will also cause a bias in the noise measurements, 
thereby hampering the accuracy of any measurement on the dataset.
+Therefore even if they don't constitute a significant fraction of the target's 
light, or aren't your primary target, these regions must not be ignored.
+In this tutorial, we'll walk you through the strategy of detecting such 
targets using @ref{NoiseChisel}.
 
 @cartouche
 @noindent
-@strong{Don't start with this tutorial:} If you haven't already completed
-@ref{General program usage tutorial}, we strongly recommend going through
-that tutorial before starting this one. Basic features like access to this
-book on the command-line, the configuration files of Gnuastro's programs,
-benefiting from the modular nature of the programs, viewing multi-extension
-FITS files, or using NoiseChisel's outputs are discussed in more detail
-there.
+@strong{Don't start with this tutorial:} If you haven't already completed 
@ref{General program usage tutorial}, we strongly recommend going through that 
tutorial before starting this one.
+Basic features like access to this book on the command-line, the configuration 
files of Gnuastro's programs, benefiting from the modular nature of the 
programs, viewing multi-extension FITS files, or using NoiseChisel's outputs 
are discussed in more detail there.
 @end cartouche
 
 @cindex M51
 @cindex NGC5195
 @cindex SDSS, Sloan Digital Sky Survey
 @cindex Sloan Digital Sky Survey, SDSS
-We'll try to detect the faint tidal wings of the beautiful M51
-group@footnote{@url{https://en.wikipedia.org/wiki/M51_Group}} in this
-tutorial. We'll use a dataset/image from the public
-@url{http://www.sdss.org/, Sloan Digital Sky Survey}, or SDSS. Due to its
-more peculiar low surface brightness structure/features, we'll focus on the
-dwarf companion galaxy of the group (or NGC 5195). To get the image, you
-can use SDSS's @url{https://dr12.sdss.org/fields, Simple field search}
-tool. As long as it is covered by the SDSS, you can find an image
-containing your desired target either by providing a standard name (if it
-has one), or its coordinates. To access the dataset we will use here, write
-@code{NGC5195} in the ``Object Name'' field and press ``Submit'' button.
+We'll try to detect the faint tidal wings of the beautiful M51 
group@footnote{@url{https://en.wikipedia.org/wiki/M51_Group}} in this tutorial.
+We'll use a dataset/image from the public @url{http://www.sdss.org/, Sloan 
Digital Sky Survey}, or SDSS.
+Due to its more peculiar low surface brightness structure/features, we'll 
focus on the dwarf companion galaxy of the group (or NGC 5195).
+To get the image, you can use SDSS's @url{https://dr12.sdss.org/fields, Simple 
field search} tool.
+As long as it is covered by the SDSS, you can find an image containing your 
desired target either by providing a standard name (if it has one), or its 
coordinates.
+To access the dataset we will use here, write @code{NGC5195} in the ``Object 
Name'' field and press the ``Submit'' button.
 
 @cartouche
 @noindent
-@strong{Type the example commands:} Try to type the example commands on
-your terminal and use the history feature of your command-line (by pressing
-the ``up'' button to retrieve previous commands). Don't simply copy and
-paste the commands shown here. This will help simulate future situations
-when you are processing your own datasets.
+@strong{Type the example commands:} Try to type the example commands on your 
terminal and use the history feature of your command-line (by pressing the 
``up'' button to retrieve previous commands).
+Don't simply copy and paste the commands shown here.
+This will help simulate future situations when you are processing your own 
datasets.
 @end cartouche
 
 @cindex GNU Wget
-You can see the list of available filters under the color image. For this
-demonstration, we'll use the r-band filter image.  By clicking on the
-``r-band FITS'' link, you can download the image. Alternatively, you can
-just run the following command to download it with GNU Wget@footnote{To
-make the command easier to view on screen or in a page, we have defined the
-top URL of the image as the @code{topurl} shell variable. You can just
-replace the value of this variable with @code{$topurl} in the
-@command{wget} command.}. To keep things clean, let's also put it in a
-directory called @file{ngc5195}. With the @option{-O} option, we are asking
-Wget to save the downloaded file with a more manageable name:
-@file{r.fits.bz2} (this is an r-band image of NGC 5195, which was the
-directory name).
+You can see the list of available filters under the color image.
+For this demonstration, we'll use the r-band filter image.
+By clicking on the ``r-band FITS'' link, you can download the image.
+Alternatively, you can just run the following command to download it with GNU 
Wget@footnote{To make the command easier to view on screen or in a page, we 
have defined the top URL of the image as the @code{topurl} shell variable.
+You can just replace the value of this variable with @code{$topurl} in the 
@command{wget} command.}.
+To keep things clean, let's also put it in a directory called @file{ngc5195}.
+With the @option{-O} option, we are asking Wget to save the downloaded file 
with a more manageable name: @file{r.fits.bz2} (this is an r-band image of NGC 
5195, which was the directory name).
 
 @example
 $ mkdir ngc5195
@@ -4345,14 +3346,11 @@ $ wget 
$topurl/301/3716/6/frame-r-003716-6-0117.fits.bz2 -Or.fits.bz2
 
 @cindex Bzip2
 @noindent
-This server keeps the files in a Bzip2 compressed file format. So we'll
-first decompress it with the following command. By convention, compression
-programs delete the original file (compressed when uncompressing, or
-uncompressed when compressing). To keep the original file, you can use the
-@option{--keep} or @option{-k} option which is available in most
-compression programs for this job. Here, we don't need the compressed file
-any more, so we'll just let @command{bunzip} delete it for us and keep the
-directory clean.
+This server keeps the files in a Bzip2 compressed file format.
+So we'll first decompress it with the following command.
+By convention, compression programs delete the original file (compressed when 
uncompressing, or uncompressed when compressing).
+To keep the original file, you can use the @option{--keep} or @option{-k} 
option which is available in most compression programs for this job.
+Here, we don't need the compressed file any more, so we'll just let 
@command{bunzip2} delete it for us and keep the directory clean.
 
 @example
 $ bunzip2 r.fits.bz2
@@ -4366,193 +3364,129 @@ $ bunzip2 r.fits.bz2
 
 @node NoiseChisel optimization, Achieved surface brightness level, Detecting 
large extended targets, Detecting large extended targets
 @subsection NoiseChisel optimization
-In @ref{Detecting large extended targets} we downladed the single exposure
-SDSS image. Let's see how NoiseChisel operates on it with its default
-parameters:
+In @ref{Detecting large extended targets} we downloaded the single exposure 
SDSS image.
+Let's see how NoiseChisel operates on it with its default parameters:
 
 @example
 $ astnoisechisel r.fits -h0
 @end example
 
-As described in @ref{Multiextension FITS files NoiseChisel's output},
-NoiseChisel's default output is a multi-extension FITS file. Open the
-output @file{r_detected.fits} file and have a look at the extensions, the
-first extension is only meta-data and contains NoiseChisel's configuration
-parameters. The rest are the Sky-subtracted input, the detection map, Sky
-values and Sky standard deviation.
+As described in @ref{Multiextension FITS files NoiseChisel's output}, 
NoiseChisel's default output is a multi-extension FITS file.
+Open the output @file{r_detected.fits} file and have a look at the 
extensions: the first extension is only meta-data and contains NoiseChisel's 
configuration parameters.
+The rest are the Sky-subtracted input, the detection map, Sky values and Sky 
standard deviation.
 
 @example
 $ ds9 -mecube r_detected.fits -zscale -zoom to fit
 @end example
 
-Flipping through the extensions in a FITS viewer, you will see that the
-first image (Sky-subtracted image) looks reasonable: there are no major
-artifacts due to bad Sky subtraction compared to the input. The second
-extension also seems reasonable with a large detection map that covers the
-whole of NGC5195, but also extends beyond towards the bottom of the
-image.
+Flipping through the extensions in a FITS viewer, you will see that the first 
image (Sky-subtracted image) looks reasonable: there are no major artifacts due 
to bad Sky subtraction compared to the input.
+The second extension also seems reasonable with a large detection map that 
covers the whole of NGC5195, but also extends beyond it, towards the bottom of 
the image.
+
+Now try flipping between the @code{DETECTIONS} and @code{SKY} extensions.
+In the @code{SKY} extension, you'll notice that there is still significant 
signal beyond the detected pixels.
+You can tell that this signal belongs to the galaxy because the far-right side 
of the image is dark and the brighter tiles are surrounding the detected pixels.
+
+The fact that signal from the galaxy remains in the Sky dataset shows that you 
haven't done a good detection.
+The @code{SKY} extension must not contain any light around the galaxy.
+Generally, any time your target is much larger than the tile size and the 
signal is almost flat (like this case), this @emph{will} happen.
+Therefore, when there are large objects in the dataset, @strong{the best 
place} to check the accuracy of your detection is the estimated Sky image.
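+
+For example, you can open only the Sky extension with a command like the one 
below (the bracketed extension-name syntax is one way to do this; your FITS 
viewer may have its own):
+
+@example
+$ ds9 "r_detected.fits[SKY]" -zscale -zoom to fit
+@end example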
 
-Now try fliping between the @code{DETECTIONS} and @code{SKY} extensions.
-In the @code{SKY} extension, you'll notice that there is still significant
-signal beyond the detected pixels. You can tell that this signal belongs to
-the galaxy because the far-right side of the image is dark and the brighter
-tiles are surrounding the detected pixels.
-
-The fact that signal from the galaxy remains in the Sky dataset shows that
-you haven't done a good detection. The @code{SKY} extension must not
-contain any light around the galaxy. Generally, any time your target is
-much larger than the tile size and the signal is almost flat (like this
-case), this @emph{will} happen. Therefore, when there are large objects in
-the dataset, @strong{the best place} to check the accuracy of your
-detection is the estimated Sky image.
-
-When dominated by the background, noise has a symmetric
-distribution. However, signal is not symmetric (we don't have negative
-signal). Therefore when non-constant signal is present in a noisy dataset,
-the distribution will be positively skewed. This skewness is a good measure
-of how much signal we have in the distribution. The skewness can be
-accurately measured by the difference in the mean and median: assuming no
-strong outliers, the more distant they are, the more skewed the dataset
-is. For more see @ref{Quantifying signal in a tile}.
-
-However, skewness is only a proxy for signal when the signal has structure
-(varies per pixel). Therefore, when it is approximately constant over a
-whole tile, or sub-set of the image, the signal's effect is just to shift
-the symmetric center of the noise distribution to the positive and there
-won't be any skewness (major difference between the mean and median). This
-positive@footnote{In processed images, where the Sky value can be
-over-estimated, this constant shift can be negative.} shift that preserves
-the symmetric distribution is the Sky value. When there is a gradient over
-the dataset, different tiles will have different constant
-shifts/Sky-values, for example see Figure 11 of
-@url{https://arxiv.org/abs/1505.01664, Akhlaghi and Ichikawa [2015]}.
-
-To get less scatter in measuring the mean and median (and thus better
-estimate the skewness), you will need a larger tile. So let's play with the
-tessellation a little to see how it affects the result. In Gnuastro, you
-can see the option values (@option{--tilesize} in this case) by adding the
-@option{-P} option to your last command. Try running NoiseChisel with
-@option{-P} to see its default tile size.
-
-You can clearly see that the default tile size is indeed much smaller than
-this (huge) galaxy and its tidal features. As a result, NoiseChisel was
-unable to identify the skewness within the tiles under the outer parts of
-M51 and NGC 5159 and the threshold has been over-estimated on those
-tiles. To see which tiles were used for estimating the quantile threshold
-(no skewness was measured), you can use NoiseChisel's
-@option{--checkqthresh} option:
+When dominated by the background, noise has a symmetric distribution.
+However, signal is not symmetric (we don't have negative signal).
+Therefore when non-constant signal is present in a noisy dataset, the 
distribution will be positively skewed.
+This skewness is a good measure of how much signal we have in the distribution.
+The skewness can be accurately measured by the difference between the mean 
and median: assuming no strong outliers, the more distant they are, the more 
skewed the dataset is.
+For more, see @ref{Quantifying signal in a tile}.
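+
+For a rough (image-wide, not per-tile) feeling of this measure, you can 
compare the mean and median of the input with Gnuastro's Statistics program 
(the actual per-tile measurement is done internally by NoiseChisel):
+
+@example
+$ aststatistics r.fits -h0 --mean --median
+@end example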
+
+However, skewness is only a proxy for signal when the signal has structure 
(varies per pixel).
+Therefore, when it is approximately constant over a whole tile, or sub-set of 
the image, the signal's effect is just to shift the symmetric center of the 
noise distribution to the positive and there won't be any skewness (major 
difference between the mean and median).
+This positive@footnote{In processed images, where the Sky value can be 
over-estimated, this constant shift can be negative.} shift that preserves the 
symmetric distribution is the Sky value.
+When there is a gradient over the dataset, different tiles will have different 
constant shifts/Sky-values, for example see Figure 11 of 
@url{https://arxiv.org/abs/1505.01664, Akhlaghi and Ichikawa [2015]}.
+
+To get less scatter in measuring the mean and median (and thus better estimate 
the skewness), you will need a larger tile.
+So let's play with the tessellation a little to see how it affects the result.
+In Gnuastro, you can see the option values (@option{--tilesize} in this case) 
by adding the @option{-P} option to your last command.
+Try running NoiseChisel with @option{-P} to see its default tile size.
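+
+For example, you can simply filter the printed option values (this 
@command{grep} pipeline is just one quick way to do it):
+
+@example
+$ astnoisechisel r.fits -h0 -P | grep tilesize
+@end example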
+
+You can clearly see that the default tile size is indeed much smaller than 
this (huge) galaxy and its tidal features.
+As a result, NoiseChisel was unable to identify the skewness within the tiles 
under the outer parts of M51 and NGC 5195 and the threshold has been 
over-estimated on those tiles.
+To see which tiles were used for estimating the quantile threshold (no 
skewness was measured), you can use NoiseChisel's @option{--checkqthresh} 
option:
 
 @example
 $ astnoisechisel r.fits -h0 --checkqthresh
 @end example
 
-Notice how this option doesn't allow NoiseChisel to finish. NoiseChisel
-aborted after finding and applying the quantile thresholds. When you call
-any of NoiseChisel's @option{--check*} options, by default, it will abort
-as soon as all the check steps have been written in the check file (a
-multi-extension FITS file). This allows you to focus on the problem you
-wanted to check as soon as possible (you can disable this feature with the
-@option{--continueaftercheck} option).
-
-To optimize the threshold-related settings for this image, let's playing
-with this quantile threshold check image a little. Don't forget that
-``@emph{Good statistical analysis is not a purely routine matter, and
-generally calls for more than one pass through the computer}'' (Anscombe
-1973, see @ref{Science and its tools}). A good scientist must have a good
-understanding of her tools to make a meaningful analysis. So don't hesitate
-in playing with the default configuration and reviewing the manual when you
-have a new dataset in front of you. Robust data analysis is an art,
-therefore a good scientist must first be a good artist.
-
-The first extension of @file{r_qthresh.fits} (@code{CONVOLVED}) is the
-convolved input image where the threshold(s) is(are) defined and
-applied. For more on the effect of convolution and thresholding, see
-Sections 3.1.1 and 3.1.2 of @url{https://arxiv.org/abs/1505.01664, Akhlaghi
-and Ichikawa [2015]}. The second extension (@code{QTHRESH_ERODE}) has a
-blank value for all the pixels of any tile that was identified as having
-significant signal. The next two extensions (@code{QTHRESH_NOERODE} and
-@code{QTHRESH_EXPAND}) are the other two quantile thresholds that are
-necessary in NoiseChisel's later steps. Every step in this file is repeated
-on the three thresholds.
-
-Play a little with the color bar of the @code{QTHRESH_ERODE} extension, you
-clearly see how the non-blank tiles around NGC 5195 have a gradient. As one
-line of attack against discarding too much signal below the threshold,
-NoiseChisel rejects outlier tiles. Go forward by three extensions to
-@code{VALUE1_NO_OUTLIER} and you will see that many of the tiles over the
-galaxy have been removed in this step. For more on the outlier rejection
-algorithm, see the latter half of @ref{Quantifying signal in a tile}.
-
-However, the default outlier rejection parameters weren't enough, and when
-you play with the color-bar, you still see a strong gradient around the
-outer tidal feature of the galaxy. You have two strategies for fixing this
-problem: 1) Increase the tile size to get more accurate measurements of
-skewness. 2) Strengthen the outlier rejection parameters to discard more of
-the tiles with signal. Fortunately in this image we have a sufficiently
-large region on the right of the image that the galaxy doesn't extend
-to. So we can use the more robust first solution. In situations where this
-doesn't happen (for example if the field of view in this image was shifted
-to have more of M51 and less sky) you are limited to a combination of the
-two solutions or just to the second solution.
+Notice how this option doesn't allow NoiseChisel to finish.
+NoiseChisel aborted after finding and applying the quantile thresholds.
+When you call any of NoiseChisel's @option{--check*} options, by default, it 
will abort as soon as all the check steps have been written in the check file 
(a multi-extension FITS file).
+This allows you to focus on the problem you wanted to check as soon as 
possible (you can disable this feature with the @option{--continueaftercheck} 
option).
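+
+In other words, if you want the final output along with the check file, a run 
like this (the same options as before, just with the extra option) should do 
it:
+
+@example
+$ astnoisechisel r.fits -h0 --checkqthresh --continueaftercheck
+@end example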
+
+To optimize the threshold-related settings for this image, let's play with 
this quantile threshold check image a little.
+Don't forget that ``@emph{Good statistical analysis is not a purely routine 
matter, and generally calls for more than one pass through the computer}'' 
(Anscombe 1973, see @ref{Science and its tools}).
+A good scientist must have a good understanding of her tools to make a 
meaningful analysis.
+So don't hesitate to play with the default configuration and review the 
manual when you have a new dataset in front of you.
+Robust data analysis is an art; therefore, a good scientist must first be a 
good artist.
+
+The first extension of @file{r_qthresh.fits} (@code{CONVOLVED}) is the 
convolved input image where the threshold(s) is(are) defined and applied.
+For more on the effect of convolution and thresholding, see Sections 3.1.1 and 
3.1.2 of @url{https://arxiv.org/abs/1505.01664, Akhlaghi and Ichikawa [2015]}.
+The second extension (@code{QTHRESH_ERODE}) has a blank value for all the 
pixels of any tile that was identified as having significant signal.
+The next two extensions (@code{QTHRESH_NOERODE} and @code{QTHRESH_EXPAND}) are 
the other two quantile thresholds that are necessary in NoiseChisel's later 
steps.
+Every step in this file is repeated on the three thresholds.
+
+Play a little with the color bar of the @code{QTHRESH_ERODE} extension; you 
will clearly see how the non-blank tiles around NGC 5195 have a gradient.
+As one line of attack against discarding too much signal below the threshold, 
NoiseChisel rejects outlier tiles.
+Go forward by three extensions to @code{VALUE1_NO_OUTLIER} and you will see 
that many of the tiles over the galaxy have been removed in this step.
+For more on the outlier rejection algorithm, see the latter half of 
@ref{Quantifying signal in a tile}.
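+
+As before, you can flip through these check steps as a multi-extension cube 
(the file name follows the pattern of NoiseChisel's other check files):
+
+@example
+$ ds9 -mecube r_qthresh.fits -zscale -zoom to fit
+@end example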
+
+However, the default outlier rejection parameters weren't enough, and when you 
play with the color-bar, you still see a strong gradient around the outer tidal 
feature of the galaxy.
+You have two strategies for fixing this problem: 1) Increase the tile size to 
get more accurate measurements of skewness.
+2) Strengthen the outlier rejection parameters to discard more of the tiles 
with signal.
+Fortunately in this image we have a sufficiently large region on the right of 
the image that the galaxy doesn't extend to.
+So we can use the more robust first solution.
+In situations where this doesn't happen (for example if the field of view in 
this image was shifted to have more of M51 and less sky) you are limited to a 
combination of the two solutions or just to the second solution.
 
 @cartouche
 @noindent
-@strong{Skipping convolution for faster tests:} The slowest step of
-NoiseChisel is the convolution of the input dataset. Therefore when your
-dataset is large (unlike the one in this test), and you are not changing
-the input dataset or kernel in multiple runs (as in the tests of this
-tutorial), it is faster to do the convolution separately once (using
-@ref{Convolve}) and use NoiseChisel's @option{--convolved} option to
-directly feed the convolved image and avoid convolution. For more on
-@option{--convolved}, see @ref{NoiseChisel input}.
+@strong{Skipping convolution for faster tests:} The slowest step of 
NoiseChisel is the convolution of the input dataset.
+Therefore when your dataset is large (unlike the one in this test), and you 
are not changing the input dataset or kernel in multiple runs (as in the tests 
of this tutorial), it is faster to do the convolution separately once (using 
@ref{Convolve}) and use NoiseChisel's @option{--convolved} option to directly 
feed the convolved image and avoid convolution.
+For more on @option{--convolved}, see @ref{NoiseChisel input}.
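+
+A minimal sketch of this workflow might look like the following (here we 
assume a Gaussian kernel of FWHM 2 pixels truncated at 5 times the FWHM; see 
@ref{NoiseChisel input} for NoiseChisel's actual default kernel and for how to 
specify the convolved file's HDU):
+
+@example
+$ astmkprof --kernel=gaussian,2,5 --oversample=1 -okernel.fits
+$ astconvolve r.fits -h0 --kernel=kernel.fits -or_conv.fits
+$ astnoisechisel r.fits -h0 --convolved=r_conv.fits
+@end example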
 @end cartouche
 
-To identify the skewness caused by the flat NGC 5195 and M51 tidal features
-on the tiles under it, we thus have to choose a tile size that is larger
-than the gradient of the signal. Let's try a tile size of 75 by 75 pixels:
+To identify the skewness caused by the flat NGC 5195 and M51 tidal features on 
the tiles under it, we thus have to choose a tile size that is larger than the 
gradient of the signal.
+Let's try a tile size of 75 by 75 pixels:
 
 @example
 $ astnoisechisel r.fits -h0 --tilesize=75,75 --checkqthresh
 @end example
 
-You can clearly see the effect of this increased tile size: the tiles are
-much larger and when you look into @code{VALUE1_NO_OUTLIER}, you see that
-almost all the previous tiles under the galaxy have been discarded and we
-only have a few tiles on the edge with a gradient. So let's define a smore
-strict condition to keep tiles:
+You can clearly see the effect of this increased tile size: the tiles are 
much larger, and when you look into @code{VALUE1_NO_OUTLIER}, you see that 
almost all the previous tiles under the galaxy have been discarded, leaving 
only a few tiles on the edge with a gradient.
+So let's define a stricter condition to keep tiles:
 
 @example
 $ astnoisechisel r.fits -h0 --tilesize=75,75 --meanmedqdiff=0.001 \
                  --checkqthresh
 @end example
 
-After constraining @code{--meanmedqdiff}, NoiseChisel stopped with a
-different error. Please read it: at the start, it says that only 6 tiles
-passed the constraint while you have asked for 9. The @file{r_qthresh.fits}
-image also only has 8 extensions (not the original 15). Take a look at the
-initially selected tiles and those after outlier rejection. You can see the
-place of the tiles that passed. They seem to be in the good place (very far
-away from the M51 group and its tidal feature. Using the 6 nearest
-neighbors is also not too bad. So let's decrease the number of neighboring
-tiles for interpolation so NoiseChisel can continue:
+After constraining @option{--meanmedqdiff}, NoiseChisel stopped with a 
different error.
+Please read it: at the start, it says that only 6 tiles passed the constraint 
while you have asked for 9.
+The @file{r_qthresh.fits} image also only has 8 extensions (not the original 
15).
+Take a look at the initially selected tiles and those after outlier rejection.
+You can see the place of the tiles that passed.
+They seem to be in a good place (very far away from the M51 group and its 
tidal feature).
+Using the 6 nearest neighbors is also not too bad.
+So let's decrease the number of neighboring tiles for interpolation so 
NoiseChisel can continue:
 
 @example
 $ astnoisechisel r.fits -h0 --tilesize=75,75 --meanmedqdiff=0.001 \
                  --interpnumngb=6 --checkqthresh
 @end example
 
-The next group of extensions (those ending with @code{_INTERP}), give a
-value to all blank tiles based on the nearest tiles with a measurement. The
-following group of extensions (ending with @code{_SMOOTH}) have smoothed
-the interpolated image to avoid sharp cuts on tile edges. Inspecting
-@code{THRESH1_SMOOTH}, you can see that there is no longer any significant
-gradient and no major signature of NGC 5195 exists.
+The next group of extensions (those ending with @code{_INTERP}) give a value 
to all blank tiles based on the nearest tiles with a measurement.
+The following group of extensions (ending with @code{_SMOOTH}) have smoothed 
the interpolated image to avoid sharp cuts on tile edges.
+Inspecting @code{THRESH1_SMOOTH}, you can see that there is no longer any 
significant gradient and no major signature of NGC 5195 exists.
 
-We can now remove @option{--checkqthresh} and let NoiseChisel proceed with
-its detection. Also, similar to the argument in @ref{NoiseChisel
-optimization for detection}, in the command above, we set the
-pseudo-detection signal-to-noise ratio quantile (@option{--snquant}) to
-0.95.
+We can now remove @option{--checkqthresh} and let NoiseChisel proceed with its 
detection.
+Also, similar to the argument in @ref{NoiseChisel optimization for detection}, 
in the command below, we set the pseudo-detection signal-to-noise ratio 
quantile (@option{--snquant}) to 0.95.
 
 @example
 $ rm r_qthresh.fits
@@ -4560,40 +3494,27 @@ $ astnoisechisel r.fits -h0 --tilesize=75,75 
--meanmedqdiff=0.001 \
                  --interpnumngb=6 --snquant=0.95
 @end example
 
-Looking at the @code{DETECTIONS} extension of NoiseChisel's output, we see
-the right-ward edges in particular have many holes that are fully
-surrounded by signal and the signal stretches out in the noise very thinly
-(the size of the holes increases as we go out). This suggests that there is
-still signal that can be detected. You can confirm this guess by looking at
-the @code{SKY} extension to see that indeed, there is a clear footprint of
-the M51 group in the Sky image (which is not good!). Therefore, we should
-dig deeper into the noise.
+Looking at the @code{DETECTIONS} extension of NoiseChisel's output, we see the 
right-ward edges in particular have many holes that are fully surrounded by 
signal and the signal stretches out in the noise very thinly (the size of the 
holes increases as we go out).
+This suggests that there is still signal that can be detected.
+You can confirm this guess by looking at the @code{SKY} extension to see that 
indeed, there is a clear footprint of the M51 group in the Sky image (which is 
not good!).
+Therefore, we should dig deeper into the noise.
 
-With the @option{--detgrowquant} option, NoiseChisel will use the
-detections as seeds and grow them in to the noise. Its value is the
-ultimate limit of the growth in units of quantile (between 0 and
-1). Therefore @option{--detgrowquant=1} means no growth and
-@option{--detgrowquant=0.5} means an ultimate limit of the Sky level (which
-is usually too much!). Try running the previous command with various values
-(from 0.6 to higher values) to see this option's effect. For this
-particularly huge galaxy (with signal that extends very gradually into the
-noise), we'll set it to @option{0.65}:
+With the @option{--detgrowquant} option, NoiseChisel will use the detections 
as seeds and grow them into the noise.
+Its value is the ultimate limit of the growth in units of quantile (between 0 
and 1).
+Therefore @option{--detgrowquant=1} means no growth and 
@option{--detgrowquant=0.5} means an ultimate limit of the Sky level (which is 
usually too much!).
+Try running the previous command with various values (from 0.6 to higher 
values) to see this option's effect.
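+
+For example, a small shell loop like this (the @file{det-*.fits} output names 
are only for this sketch) lets you compare a few values side by side:
+
+@example
+$ for q in 0.6 0.7 0.8 0.9; do                                      \
+    astnoisechisel r.fits -h0 --tilesize=75,75 --meanmedqdiff=0.001 \
+                   --interpnumngb=6 --snquant=0.95                  \
+                   --detgrowquant=$q --output=det-$q.fits;          \
+  done
+@end example
+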
+For this particularly huge galaxy (with signal that extends very gradually 
into the noise), we'll set it to @option{0.65}:
 
 @example
 $ astnoisechisel r.fits -h0 --tilesize=75,75 --meanmedqdiff=0.001 \
                  --interpnumngb=6 --snquant=0.95 --detgrowquant=0.65
 @end example
 
-Beyond this level (smaller @option{--detgrowquant} values), you see the
-smaller background galaxies starting to create thin spider-leg-like
-features, showing that we are following correlated noise for too much.
+Beyond this level (smaller @option{--detgrowquant} values), you see the 
smaller background galaxies starting to create thin spider-leg-like features, 
showing that we are following correlated noise too far.
 
-Now, when you look at the @code{DETECTIONS} extension, you see the wings of
-the galaxy being detected much farther out, But you also see many holes
-which are clearly just caused by noise. After growing the objects,
-NoiseChisel also allows you to fill such holes when they are smaller than a
-certain size through the @option{--detgrowmaxholesize} option. In this
-case, a maximum area/size of 10,000 pixels seems to be good:
+Now, when you look at the @code{DETECTIONS} extension, you see the wings of 
the galaxy being detected much farther out, but you also see many holes which 
are clearly just caused by noise.
+After growing the objects, NoiseChisel also allows you to fill such holes when 
they are smaller than a certain size through the @option{--detgrowmaxholesize} 
option.
+In this case, a maximum area/size of 10,000 pixels seems to be good:
 
 @example
 $ astnoisechisel r.fits -h0 --tilesize=75,75 --meanmedqdiff=0.001    \
@@ -4601,17 +3522,13 @@ $ astnoisechisel r.fits -h0 --tilesize=75,75 
--meanmedqdiff=0.001    \
                  --detgrowmaxholesize=10000
 @end example
 
-The detection looks good now, but when you look in to the @code{SKY}
-extension, you still clearly still see a footprint of the galaxy. We'll
-leave it as an exercise for you to play with NoiseChisel further and
-improve the detected pixels.
+The detection looks good now, but when you look into the @code{SKY} 
extension, you still clearly see a footprint of the galaxy.
+We'll leave it as an exercise for you to play with NoiseChisel further and 
improve the detected pixels.
 
-So, we'll just stop with one last tool NoiseChisel gives you to get a
-slightly better estimation of the Sky: @option{--minskyfrac}. On each tile,
-NoiseChisel will only measure the Sky-level if the fraction of undetected
-pixels is larger than the value given to this option. To avoid the edges of
-the galaxy, we'll set it to @option{0.9}. Therefore, tiles that are covered
-by detected pixels for more than @mymath{10\%} of their area are ignored.
+So, we'll just stop with one last tool NoiseChisel gives you to get a slightly 
better estimation of the Sky: @option{--minskyfrac}.
+On each tile, NoiseChisel will only measure the Sky-level if the fraction of 
undetected pixels is larger than the value given to this option.
+To avoid the edges of the galaxy, we'll set it to @option{0.9}.
+Therefore, tiles that are covered by detected pixels for more than 
@mymath{10\%} of their area are ignored.
 
 @example
 $ astnoisechisel r.fits -h0 --tilesize=75,75 --meanmedqdiff=0.001    \
@@ -4619,20 +3536,16 @@ $ astnoisechisel r.fits -h0 --tilesize=75,75 
--meanmedqdiff=0.001    \
                  --detgrowmaxholesize=10000 --minskyfrac=0.9
 @end example
 
-The footprint of the galaxy still exists in the @code{SKY} extension, but
-it has decreased in significance now. Let's calculate the significance of
-the undetected gradient, in units of noise. Since the gradient is roughly
-along the horizontal axis, we'll collapse the image along the second
-(vertical) FITS dimension to have a 1D array (a table column, see its
-values with the second command).
+The footprint of the galaxy still exists in the @code{SKY} extension, but it 
has decreased in significance now.
+Let's calculate the significance of the undetected gradient, in units of noise.
+Since the gradient is roughly along the horizontal axis, we'll collapse the 
image along the second (vertical) FITS dimension to have a 1D array (a table 
column; see its values with the second command).
 
 @example
 $ astarithmetic r_detected.fits 2 collapse-mean -hSKY -ocollapsed.fits
 $ asttable collapsed.fits
 @end example
 
-We can now calculate the minimum and maximum values of this array and
-define their difference (in units of noise) as the gradient:
+We can now calculate the minimum and maximum values of this array and define 
their difference (in units of noise) as the gradient:
 
 @example
 $ grad=$(astarithmetic r_detected.fits 2 collapse-mean set-i   \
@@ -4643,89 +3556,57 @@ $ echo $std
 $ astarithmetic -q $grad $std /
 @end example
 
-The undetected gradient (@code{grad} above) is thus roughly a quarter of
-the noise. But don't forget that this is per-pixel: individually its small,
-but it extends over millions of pixels, so the total flux may still be
-relevant.
-
-When looking at the raw input shallow image, you don't see anything so far
-out of the galaxy. You might just think that ``this is all noise, I have
-just dug too deep and I'm following systematics''! If you feel like this,
-have a look at the deep images of this system in
-@url{https://arxiv.org/abs/1501.04599, Watkins et al. [2015]}, or a 12 hour
-deep image of this system (with a 12-inch telescope):
-@url{https://i.redd.it/jfqgpqg0hfk11.jpg}@footnote{The image is taken from
-this Reddit discussion:
-@url{https://www.reddit.com/r/Astronomy/comments/9d6x0q/12_hours_of_exposure_on_the_whirlpool_galaxy/}}.
 In
-these deepr images you see that the outer edges of the M51 group clearly
-follow this exact structure, below in @ref{Achieved surface brightness
-level}, we'll measure the exact level.
-
-As the gradient in the @code{SKY} extension shows, and the deep images
-cited above confirm, the galaxy's signal extends even beyond this. But this
-is already far deeper than what most (if not all) other tools can detect.
-Therefore, we'll stop configuring NoiseChisel at this point in the tutorial
-and let you play with it a little more while reading more about it in
-@ref{NoiseChisel}.
-
-After finishing this tutorial please go through the NoiseChisel paper and
-its options and play with them to further decrease the gradient. This will
-greatly help you get a good feeling of the options. When you do find a
-better configuration, please send it to us and we'll mention your name here
-with your suggested configuration. Don't forget that good data analysis is
-an art, so like a sculptor, master your chisel for a good result.
+The undetected gradient (@code{grad} above) is thus roughly a quarter of the 
noise.
+But don't forget that this is per-pixel: individually it's small, but it 
extends over millions of pixels, so the total flux may still be relevant.
+
+When looking at the raw input shallow image, you don't see anything so far out 
of the galaxy.
+You might just think that ``this is all noise, I have just dug too deep and 
I'm following systematics''!
+If you feel like this, have a look at the deep images of this system in 
@url{https://arxiv.org/abs/1501.04599, Watkins et al. [2015]}, or a 12 hour 
deep image of this system (with a 12-inch telescope): 
@url{https://i.redd.it/jfqgpqg0hfk11.jpg}@footnote{The image is taken from 
this Reddit discussion: 
@url{https://www.reddit.com/r/Astronomy/comments/9d6x0q/12_hours_of_exposure_on_the_whirlpool_galaxy/}}.
+In these deeper images you see that the outer edges of the M51 group clearly 
follow this exact structure; below, in @ref{Achieved surface brightness 
level}, we'll measure the exact level.
+
+As the gradient in the @code{SKY} extension shows, and the deep images cited 
above confirm, the galaxy's signal extends even beyond this.
+But this is already far deeper than what most (if not all) other tools can 
detect.
+Therefore, we'll stop configuring NoiseChisel at this point in the tutorial 
and let you play with it a little more while reading more about it in 
@ref{NoiseChisel}.
+
+After finishing this tutorial, please go through the NoiseChisel paper and 
its options and play with them to further decrease the gradient.
+This will greatly help you get a good feeling of the options.
+When you do find a better configuration, please send it to us and we'll 
mention your name here with your suggested configuration.
+Don't forget that good data analysis is an art, so like a sculptor, master 
your chisel for a good result.
 
 @cartouche
 @noindent
-@strong{This NoiseChisel configuration is NOT GENERIC:} Don't use this
-configuration blindly on another image. As you saw above, the reason we
-chose this particular configuration for NoiseChisel to detect the wings of
-the M51 group was strongly influenced by the noise properties of this
-particular image. So as long as your image noise has similar properties
-(from the same data-reduction step of the same database), you can use this
-configuration on any image. For images from other instruments, or
-higher-level/reduced SDSS products, please follow a similar logic to what
-was presented here and find the best configuation yourself.
+@strong{This NoiseChisel configuration is NOT GENERIC:} Don't use this 
configuration blindly on another image.
+As you saw above, the reason we chose this particular configuration for 
NoiseChisel to detect the wings of the M51 group was strongly influenced by the 
noise properties of this particular image.
+So as long as your image noise has similar properties (from the same 
data-reduction step of the same database), you can use this configuration on 
any image.
+For images from other instruments, or higher-level/reduced SDSS products, 
please follow a similar logic to what was presented here and find the best 
configuration yourself.
 @end cartouche
 
 @cartouche
 @noindent
-@strong{Smart NoiseChisel:} As you saw during this section, there is a
-clear logic behind the optimal paramter value for each dataset. Therfore,
-we plan to capabilities to (optionally) automate some of the choices made
-here based on the actual dataset, please join us in doing this if you are
-interested. However, given the many problems in existing ``smart''
-solutions, such automatic changing of the configuration may cause more
-problems than they solve. So even when they are implemented, we would
-strongly recommend quality checks for a robust analysis.
+@strong{Smart NoiseChisel:} As you saw during this section, there is a clear 
logic behind the optimal parameter value for each dataset.
+Therefore, we plan to add capabilities to (optionally) automate some of the 
choices made here based on the actual dataset; please join us in doing this if 
you are interested.
+However, given the many problems in existing ``smart'' solutions, such 
automatic changing of the configuration may cause more problems than it solves.
+So even when they are implemented, we would strongly recommend quality checks 
for a robust analysis.
 @end cartouche
 
 @node Achieved surface brightness level,  , NoiseChisel optimization, 
Detecting large extended targets
 @subsection Achieved surface brightness level
-In @ref{NoiseChisel optimization} we showed how to customize NoiseChisel
-for a single-exposure SDSS image of the M51 group. let's measure how deep
-we carved the signal out of noise. For this measurement, we'll need to
-estimate the average flux on the outer edges of the detection. Fortunately
-all this can be done with a few simple commands (and no higher-level
-language mini-environments like Python or IRAF) using @ref{Arithmetic} and
-@ref{MakeCatalog}.
+In @ref{NoiseChisel optimization} we showed how to customize NoiseChisel for a 
single-exposure SDSS image of the M51 group.
+Let's measure how deep we carved the signal out of noise.
+For this measurement, we'll need to estimate the average flux on the outer 
edges of the detection.
+Fortunately all this can be done with a few simple commands (and no 
higher-level language mini-environments like Python or IRAF) using 
@ref{Arithmetic} and @ref{MakeCatalog}.
 
 @cindex Opening
-First, let's separate each detected region, or give a unique label/counter
-to all the connected pixels of NoiseChisel's detection map:
+First, let's separate each detected region, or give a unique label/counter to 
all the connected pixels of NoiseChisel's detection map:
 
 @example
 $ det="r_detected.fits -hDETECTIONS"
 $ astarithmetic $det 2 connected-components -olabeled.fits
 @end example
 
-You can find the the label of the main galaxy visually (by opening the
-image and hovering your mouse over the M51 group's label). But to have a
-little more fun, lets do this automatically. The M51 group detection is by
-far the largest detection in this image, this allows us to find the
-ID/label that corresponds to it. We'll first run MakeCatalog to find the
-area of all the detections, then we'll use AWK to find the ID of the
-largest object and keep it as a shell variable (@code{id}):
+You can find the label of the main galaxy visually (by opening the image and 
hovering your mouse over the M51 group's label).
+But to have a little more fun, let's do this automatically.
+The M51 group detection is by far the largest detection in this image; this 
allows us to find the ID/label that corresponds to it.
+We'll first run MakeCatalog to find the area of all the detections, then we'll 
use AWK to find the ID of the largest object and keep it as a shell variable 
(@code{id}):
 
 @example
 $ astmkcatalog labeled.fits --ids --geoarea -h1 -ocat.txt
@@ -4733,10 +3614,9 @@ $ id=$(awk '!/^#/@{if($2>max) @{id=$1; max=$2@}@} 
END@{print id@}' cat.txt)
 $ echo $id
 @end example
 
-To separate the outer edges of the detections, we'll need to ``erode'' the
-M51 group detection. We'll erode thre times (to have more pixels and thus
-less scatter), using a maximum connectivity of 2 (8-connected
-neighbors). We'll then save the output in @file{eroded.fits}.
+To separate the outer edges of the detections, we'll need to ``erode'' the M51 
group detection.
+We'll erode three times (to have more pixels and thus less scatter), using a 
maximum connectivity of 2 (8-connected neighbors).
+We'll then save the output in @file{eroded.fits}.
 
 @example
 $ astarithmetic labeled.fits $id eq 2 erode 2 erode 2 erode \
@@ -4744,50 +3624,37 @@ $ astarithmetic labeled.fits $id eq 2 erode 2 erode 2 
erode \
 @end example
 
 @noindent
-In @file{labeled.fits}, we can now set all the 1-valued pixels of
-@file{eroded.fits} to 0 using Arithmetic's @code{where} operator added to
-the previous command. We'll need the pixels of the M51 group in
-@code{labeled.fits} two times: once to do the erosion, another time to find
-the outer pixel layer. To do this (and be efficient and more readable)
-we'll use the @code{set-i} operator. In the command below, it will
-save/set/name the pixels of the M51 group as the `@code{i}'. In this way we
-can use it any number of times afterwards, while only reading it from disk
-and finding M51's pixels once.
+In @file{labeled.fits}, we can now set all the 1-valued pixels of 
@file{eroded.fits} to 0 using Arithmetic's @code{where} operator added to the 
previous command.
+We'll need the pixels of the M51 group in @file{labeled.fits} two times: once 
to do the erosion, another time to find the outer pixel layer.
+To do this (and be efficient and more readable) we'll use the @code{set-i} 
operator.
+In the command below, it will save/set/name the pixels of the M51 group as 
`@code{i}'.
+In this way we can use it any number of times afterwards, while only reading 
it from disk and finding M51's pixels once.
 
 @example
 $ astarithmetic labeled.fits $id eq set-i i \
                 i 2 erode 2 erode 2 erode 0 where -oedge.fits
 @end example
 
-Open the image and have a look. You'll see that the detected edge of the
-M51 group is now clearly visible. You can use @file{edge.fits} to mark
-(set to blank) this boundary on the input image and get a visual feeling of
-how far it extends:
+Open the image and have a look.
+You'll see that the detected edge of the M51 group is now clearly visible.
+You can use @file{edge.fits} to mark (set to blank) this boundary on the input 
image and get a visual feeling of how far it extends:
 
 @example
 $ astarithmetic r.fits edge.fits nan where -ob-masked.fits -h0
 @end example
 
-To quantify how deep we have detected the low-surface brightness regions,
-we'll use the command below. In short it just divides all the non-zero
-pixels of @file{edge.fits} in the Sky subtracted input (first extension
-of NoiseChisel's output) by the pixel standard deviation of the same
-pixel. This will give us a signal-to-noise ratio image. The mean value of
-this image shows the level of surface brightness that we have achieved.
-
-You can also break the command below into multiple calls to Arithmetic and
-create temporary files to understand it better. However, if you have a look
-at @ref{Reverse polish notation} and @ref{Arithmetic operators}, you should
-be able to easily understand what your computer does when you run this
-command@footnote{@file{edge.fits} (extension @code{1}) is a binary (0
-or 1 valued) image. Applying the @code{not} operator on it, just flips all
-its pixels. Through the @code{where} operator, we are setting all the newly
-1-valued pixels in @file{r_detected.fits} (extension @code{INPUT-NO-SKY})
-to NaN/blank. In the second line, we are dividing all the non-blank values
-by @file{r_detected.fits} (extension @code{SKY_STD}). This gives the
-signal-to-noise ratio for each of the pixels on the boundary. Finally, with
-the @code{meanvalue} operator, we are taking the mean value of all the
-non-blank pixels and reporting that as a single number.}.
+To quantify how deep we have detected the low-surface brightness regions, 
we'll use the command below.
+In short, over all the non-zero pixels of @file{edge.fits}, it just divides 
the Sky-subtracted input (first extension of NoiseChisel's output) by the 
standard deviation of the same pixel.
+This will give us a signal-to-noise ratio image.
+The mean value of this image shows the level of surface brightness that we 
have achieved.
+
+You can also break the command below into multiple calls to Arithmetic and 
create temporary files to understand it better.
+However, if you have a look at @ref{Reverse polish notation} and 
@ref{Arithmetic operators}, you should be able to easily understand what your 
computer does when you run this command@footnote{@file{edge.fits} (extension 
@code{1}) is a binary (0 or 1 valued) image.
+Applying the @code{not} operator on it, just flips all its pixels.
+Through the @code{where} operator, we are setting all the newly 1-valued 
pixels in @file{r_detected.fits} (extension @code{INPUT-NO-SKY}) to NaN/blank.
+In the second line, we are dividing all the non-blank values by 
@file{r_detected.fits} (extension @code{SKY_STD}).
+This gives the signal-to-noise ratio for each of the pixels on the boundary.
+Finally, with the @code{meanvalue} operator, we are taking the mean value of 
all the non-blank pixels and reporting that as a single number.}.
 
 @example
 $ edge="edge.fits -h1"
@@ -4798,14 +3665,10 @@ $ astarithmetic $skysub $skystd / $edge not nan where   
    \
 @end example
 
 @cindex Surface brightness
-We have thus detected the wings of the M51 group down to roughly 1/4th of
-the noise level in this image! But the signal-to-noise ratio is a relative
-measurement. Let's also measure the depth of our detection in absolute
-surface brightness units; or magnitudes per square arcseconds. To find out,
-we'll first need to calculate how many pixels of this image are in one
-arcsecond-squared. Fortunately the world coordinate system (or WCS) meta
-data of Gnuastro's output FITS files (in particular the @code{CDELT}
-keywords) give us this information.
+We have thus detected the wings of the M51 group down to roughly 1/4th of the 
noise level in this image!
+But the signal-to-noise ratio is a relative measurement.
+Let's also measure the depth of our detection in absolute surface brightness 
units, or magnitudes per square arcsecond.
+To find out, we'll first need to calculate how many pixels of this image are 
in one arcsecond-squared.
+Fortunately the world coordinate system (or WCS) meta data of Gnuastro's 
output FITS files (in particular the @code{CDELT} keywords) give us this 
information.
 
 @example
 $ pixscale=$(astfits r_detected.fits -h1                           \
@@ -4814,9 +3677,8 @@ $ echo $pixscale
 @end example
 
 @noindent
-Note that we multiplied the value by 3600 so we work in units of
-arc-seconds not degrees. Now, let's calculate the average sky-subtracted
-flux in the border region per pixel.
+Note that we multiplied the value by 3600 so we work in units of arc-seconds 
not degrees.
+Now, let's calculate the average sky-subtracted flux in the border region per 
pixel.
 
 @example
 $ f=$(astarithmetic r_detected.fits edge.fits not nan where set-i \
@@ -4825,15 +3687,11 @@ $ echo $f
 @end example
 
 @noindent
-We can just multiply the two to get the average flux on this border in one
-arcsecond squared. We also have the r-band SDSS zeropoint
-magnitude@footnote{From
-@url{http://classic.sdss.org/dr7/algorithms/fluxcal.html}} to be
-24.80. Therefore we can get the surface brightness of the outer edge (in
-magnitudes per arcsecond squared) using the following command. Just note
-that @code{log} in AWK is in base-2 (not 10), and that AWK doesn't have a
-@code{log10} operator. So we'll do an extra division by @code{log(10)} to
-correct for this.
+We can just multiply the two to get the average flux on this border in one 
arcsecond squared.
+We also have the r-band SDSS zeropoint magnitude@footnote{From 
@url{http://classic.sdss.org/dr7/algorithms/fluxcal.html}} to be 24.80.
+Therefore we can get the surface brightness of the outer edge (in magnitudes 
per arcsecond squared) using the following command.
+Just note that @code{log} in AWK is the natural logarithm (base @mymath{e}, 
not 10), and that AWK doesn't have a @code{log10} operator.
+So we'll do an extra division by @code{log(10)} to correct for this.
 
 @example
 $ z=24.80
@@ -4841,26 +3699,16 @@ $ echo "$pixscale $f $z" | awk '@{print 
-2.5*log($1*$2)/log(10)+$3@}'
 --> 28.2989
 @end example
 
-On a single-exposure SDSS image, we have reached a surface brightness limit
-fainter than 28 magnitudes per arcseconds squared!
+On a single-exposure SDSS image, we have reached a surface brightness limit 
fainter than 28 magnitudes per arcsecond squared!
 
-In interpreting this value, you should just have in mind that NoiseChisel
-works based on the contiguity of signal in the pixels. Therefore the larger
-the object, the deeper NoiseChisel can carve it out of the noise. In other
-words, this reported depth, is only for this particular object and dataset,
-processed with this particular NoiseChisel configuration: if the M51 group
-in this image was larger/smaller than this, or if the image was
-larger/smaller, or if we had used a different configuration, we would go
-deeper/shallower.
+In interpreting this value, you should just keep in mind that NoiseChisel 
works based on the contiguity of signal in the pixels.
+Therefore the larger the object, the deeper NoiseChisel can carve it out of 
the noise.
+In other words, this reported depth is only for this particular object and 
dataset, processed with this particular NoiseChisel configuration: if the M51 
group in this image was larger/smaller than this, or if the image was 
larger/smaller, or if we had used a different configuration, we would go 
deeper/shallower.
 
-To avoid typing all these options every time you run NoiseChisel on this
-image, you can use Gnuastro's configuration files, see @ref{Configuration
-files}. For an applied example of setting/using them, see @ref{Option
-management and configuration files}.
+To avoid typing all these options every time you run NoiseChisel on this 
image, you can use Gnuastro's configuration files, see @ref{Configuration 
files}.
+For an applied example of setting/using them, see @ref{Option management and 
configuration files}.
 
-To continue your analysis of such datasets with extended emission, you can
-use @ref{Segment} to identify all the ``clumps'' over the diffuse regions:
-background galaxies and foreground stars.
+To continue your analysis of such datasets with extended emission, you can use 
@ref{Segment} to identify all the ``clumps'' over the diffuse regions: 
background galaxies and foreground stars.
 
 @example
 $ astsegment r_detected.fits
@@ -4868,25 +3716,17 @@ $ astsegment r_detected.fits
 
 @cindex DS9
 @cindex SAO DS9
-Open the output @file{r_detected_segmented.fits} as a multi-extension data
-cube like before and flip through the first and second extensions to see
-the detected clumps (all pixels with a value larger than 1). To optimize
-the parameters and make sure you have detected what you wanted, its highly
-recommended to visually inspect the detected clumps on the input image.
-
-For visual inspection, you can make a simple shell script like below. It
-will first call MakeCatalog to estimate the positions of the clumps, then
-make an SAO ds9 region file and open ds9 with the image and region
-file. Recall that in a shell script, the numeric variables (like @code{$1},
-@code{$2}, and @code{$3} in the example below) represent the arguments
-given to the script. But when used in the AWK arguments, they refer to
-column numbers.
-
-To create the shell script, using your favorite text editor, put the
-contents below into a file called @file{check-clumps.sh}. Recall that
-everything after a @code{#} is just comments to help you understand the
-command (so read them!). Also note that if you are copying from the PDF
-version of this book, fix the single quotes in the AWK command.
+Open the output @file{r_detected_segmented.fits} as a multi-extension data 
cube like before and flip through the first and second extensions to see the 
detected clumps (all pixels with a value larger than 1).
+To optimize the parameters and make sure you have detected what you wanted, 
it's highly recommended to visually inspect the detected clumps on the input 
image.
+
+For visual inspection, you can make a simple shell script like below.
+It will first call MakeCatalog to estimate the positions of the clumps, then 
make an SAO ds9 region file and open ds9 with the image and region file.
+Recall that in a shell script, the numeric variables (like @code{$1}, 
@code{$2}, and @code{$3} in the example below) represent the arguments given to 
the script.
+But when used in the AWK arguments, they refer to column numbers.
+
+To create the shell script, using your favorite text editor, put the contents 
below into a file called @file{check-clumps.sh}.
+Recall that everything after a @code{#} is just comments to help you 
understand the command (so read them!).
+Also note that if you are copying from the PDF version of this book, fix the 
single quotes in the AWK command.
 
 @example
 #! /bin/bash
@@ -4915,9 +3755,8 @@ rm $1"_cat.fits" $1.reg
 @end example
 
 @noindent
-Finally, you just have to activate the script's executable flag with the
-command below. This will enable you to directly/easily call the script as a
-command.
+Finally, you just have to activate the script's executable flag with the 
command below.
+This will enable you to directly/easily call the script as a command.
 
 @example
 $ chmod +x check-clumps.sh
@@ -4925,72 +3764,47 @@ $ chmod +x check-clumps.sh
 
 @cindex AWK
 @cindex GNU AWK
-This script doesn't expect the @file{.fits} suffix of the input's filename
-as the first argument. Because the script produces intermediate files (a
-catalog and DS9 region file, which are later deleted). However, we don't
-want multiple instances of the script (on different files in the same
-directory) to collide (read/write to the same intermediate
-files). Therefore, we have used suffixes added to the input's name to
-identify the intermediate files. Note how all the @code{$1} instances in
-the commands (not within the AWK command@footnote{In AWK, @code{$1} refers
-to the first column, while in the shell script, it refers to the first
-argument.}) are followed by a suffix. If you want to keep the intermediate
-files, put a @code{#} at the start of the last line.
-
-The few, but high-valued, bright pixels in the central parts of the
-galaxies can hinder easy visual inspection of the fainter parts of the
-image. With the second and third arguments to this script, you can set the
-numerical values of the color map (first is minimum/black, second is
-maximum/white). You can call this script with any@footnote{Some
-modifications are necessary based on the input dataset: depending on the
-dynamic range, you have to adjust the second and third arguments. But more
-importantly, depending on the dataset's world coordinate system, you have
-to change the region @code{width}, in the AWK command. Otherwise the circle
-regions can be too small/large.} output of Segment (when
-@option{--rawoutput} is @emph{not} used) with a command like this:
+This script doesn't expect the @file{.fits} suffix of the input's filename as 
the first argument, because the script produces intermediate files (a catalog 
and DS9 region file, which are later deleted).
+However, we don't want multiple instances of the script (on different files in 
the same directory) to collide (read/write to the same intermediate files).
+Therefore, we have used suffixes added to the input's name to identify the 
intermediate files.
+Note how all the @code{$1} instances in the commands (not within the AWK 
command@footnote{In AWK, @code{$1} refers to the first column, while in the 
shell script, it refers to the first argument.}) are followed by a suffix.
+If you want to keep the intermediate files, put a @code{#} at the start of the 
last line.
+
+The few, but high-valued, bright pixels in the central parts of the galaxies 
can hinder easy visual inspection of the fainter parts of the image.
+With the second and third arguments to this script, you can set the numerical 
values of the color map (first is minimum/black, second is maximum/white).
+You can call this script with any@footnote{Some modifications are necessary 
based on the input dataset: depending on the dynamic range, you have to adjust 
the second and third arguments.
+But more importantly, depending on the dataset's world coordinate system, you 
have to change the region @code{width}, in the AWK command.
+Otherwise the circle regions can be too small/large.} output of Segment (when 
@option{--rawoutput} is @emph{not} used) with a command like this:
 
 @example
 $ ./check-clumps.sh r_detected_segmented -0.1 2
 @end example
 
-Go ahead and run this command. You will see the intermediate processing
-being done and finally it opens SAO DS9 for you with the regions
-superimposed on all the extensions of Segment's output. The script will
-only finish (and give you control of the command-line) when you close
-DS9. If you need your access to the command-line before closing DS9, add a
-@code{&} after the end of the command above.
+Go ahead and run this command.
+You will see the intermediate processing being done and finally it opens SAO 
DS9 for you with the regions superimposed on all the extensions of Segment's 
output.
+The script will only finish (and give you control of the command-line) when 
you close DS9.
+If you need access to the command-line before closing DS9, add a @code{&} after the end of the command above.
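+
+For example, the same command as before, simply backgrounded so the shell prompt returns immediately:
+
+@example
+$ ./check-clumps.sh r_detected_segmented -0.1 2 &
+@end example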
 
 @cindex Purity
 @cindex Completeness
-While DS9 is open, slide the dynamic range (values for black and white, or
-minimum/maximum values in different color schemes) and zoom into various
-regions of the M51 group to see if you are satisfied with the detected
-clumps. Don't forget that through the ``Cube'' window that is opened along
-with DS9, you can flip through the extensions and see the actual clumps
-also. The questions you should be asking your self are these: 1) Which real
-clumps (as you visually @emph{feel}) have been missed? In other words, is
-the @emph{completeness} good? 2) Are there any clumps which you @emph{feel}
-are false? In other words, is the @emph{purity} good?
-
-Note that completeness and purity are not independent of each other, they
-are anti-correlated: the higher your purity, the lower your completeness
-and vice-versa. You can see this by playing with the purity level using the
-@option{--snquant} option. Run Segment as shown above again with @code{-P}
-and see its default value. Then increase/decrease it for higher/lower
-purity and check the result as before. You will see that if you want the
-best purity, you have to sacrifice completeness and vice versa.
-
-One interesting region to inspect in this image is the many bright peaks
-around the central parts of M51. Zoom into that region and inspect how many
-of them have actually been detected as true clumps. Do you have a good
-balance between completeness and purity? Also look out far into the wings
-of the group and inspect the completeness and purity there.
-
-An easier way to inspect completeness (and only completeness) is to mask all
-the pixels detected as clumps and visually inspecting the rest of the
-pixels. You can do this using Arithmetic in a command like below. For easy
-reading of the command, we'll define the shell variable @code{i} for the
-image name and save the output in @file{masked.fits}.
+While DS9 is open, slide the dynamic range (values for black and white, or 
minimum/maximum values in different color schemes) and zoom into various 
regions of the M51 group to see if you are satisfied with the detected clumps.
+Don't forget that through the ``Cube'' window that is opened along with DS9, you can flip through the extensions and see the actual clumps as well.
+The questions you should be asking yourself are these:
+1) Which real clumps (as you visually @emph{feel}) have been missed? In other words, is the @emph{completeness} good?
+2) Are there any clumps which you @emph{feel} are false? In other words, is the @emph{purity} good?
+
+Note that completeness and purity are not independent of each other; they are anti-correlated: the higher your purity, the lower your completeness and vice-versa.
+You can see this by playing with the purity level using the @option{--snquant} 
option.
+Run Segment as shown above again with @code{-P} and see its default value.
+Then increase/decrease it for higher/lower purity and check the result as 
before.
+You will see that if you want the best purity, you have to sacrifice 
completeness and vice versa.
+
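+For example, to demand higher purity, you could try a command like the following (a sketch: @file{r_detected.fits} is assumed to be the same input you gave Segment earlier in this tutorial, and 0.99 is only an illustrative quantile, stricter than the default):
+
+@example
+$ astsegment r_detected.fits --snquant=0.99
+@end example
+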
+One interesting region to inspect in this image is the area with many bright peaks around the central parts of M51.
+Zoom into that region and inspect how many of them have actually been detected 
as true clumps.
+Do you have a good balance between completeness and purity?
+Also look out far into the wings of the group and inspect the completeness and purity there.
+
+An easier way to inspect completeness (and only completeness) is to mask all the pixels detected as clumps and visually inspect the rest of the pixels.
+You can do this using Arithmetic in a command like below.
+For easy reading of the command, we'll define the shell variables @code{in} and @code{clumps} for the input image and the clump labels (each with its HDU) and save the output in @file{clumps-masked.fits}.
 
 @example
 $ in="r_detected_segmented.fits -hINPUT"
@@ -4998,33 +3812,23 @@ $ clumps="r_detected_segmented.fits -hCLUMPS"
 $ astarithmetic $in $clumps 0 gt nan where -oclumps-masked.fits
 @end example
 
-Inspecting @file{clumps-masked.fits}, you can see some very diffuse peaks
-that have been missed, especially as you go farther away from the group
-center and into the diffuse wings. This is due to the fact that with this
-configuration, we have focused more on the sharper clumps. To put the focus
-more on diffuse clumps, you can use a wider convolution kernel. Using a
-larger kernel can also help in detecting the existing clumps to fainter
-levels (thus better separating them from the surrounding diffuse signal).
+Inspecting @file{clumps-masked.fits}, you can see some very diffuse peaks that 
have been missed, especially as you go farther away from the group center and 
into the diffuse wings.
+This is because, with this configuration, we have focused more on the sharper clumps.
+To put the focus more on diffuse clumps, you can use a wider convolution 
kernel.
+Using a larger kernel can also help in detecting the existing clumps to 
fainter levels (thus better separating them from the surrounding diffuse 
signal).
 
-You can make any kernel easily using the @option{--kernel} option in
-@ref{MakeProfiles}. But note that a larger kernel is also going to wash-out
-many of the sharp/small clumps close to the center of M51 and also some
-smaller peaks on the wings. Please continue playing with Segment's
-configuration to obtain a more complete result (while keeping reasonable
-purity). We'll finish the discussion on finding true clumps at this point.
+You can make any kernel easily using the @option{--kernel} option in @ref{MakeProfiles}, as sketched after this paragraph.
+But note that a larger kernel is also going to wash out many of the sharp/small clumps close to the center of M51 and also some smaller peaks on the wings.
+Please continue playing with Segment's configuration to obtain a more complete 
result (while keeping reasonable purity).
+We'll finish the discussion on finding true clumps at this point.
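+
+As a sketch of building a wider kernel and feeding it to Segment (the Gaussian FWHM of 3 pixels and the file names here are only illustrative, not recommended values; @file{r_detected.fits} is assumed to be your earlier Segment input):
+
+@example
+$ astmkprof --kernel=gaussian,3,5 --oversample=1 \
+            --output=kernel-wide.fits
+$ astsegment r_detected.fits --kernel=kernel-wide.fits
+@end example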
 
-The properties of the clumps within M51, or the background objects can then
-easily be measured using @ref{MakeCatalog}. To measure the properties of
-the background objects (detected as clumps over the diffuse region), you
-shouldn't mask the diffuse region. When measuring clump properties with
-@ref{MakeCatalog} and using the @option{--clumpscat}, the ambient flux
-(from the diffuse region) is calculated and subtracted. If the diffuse
-region is masked, its effect on the clump brightness cannot be calculated
-and subtracted.
+The properties of the clumps within M51, or the background objects, can then easily be measured using @ref{MakeCatalog}.
+To measure the properties of the background objects (detected as clumps over 
the diffuse region), you shouldn't mask the diffuse region.
+When measuring clump properties with @ref{MakeCatalog} and using the @option{--clumpscat} option, the ambient flux (from the diffuse region) is calculated and subtracted.
+If the diffuse region is masked, its effect on the clump brightness cannot be 
calculated and subtracted.
 
-To keep this tutorial short, we'll stop here. See @ref{Segmentation and
-making a catalog} and @ref{Segment} for more on using Segment, producing
-catalogs with MakeCatalog and using those catalogs.
+To keep this tutorial short, we'll stop here.
+See @ref{Segmentation and making a catalog} and @ref{Segment} for more on 
using Segment, producing catalogs with MakeCatalog and using those catalogs.
 
 
 
@@ -5042,44 +3846,25 @@ catalogs with MakeCatalog and using those catalogs.
 @c were seen to follow this ``Installation'' chapter title in search of the
 @c tarball and fast instructions.
 @cindex Installation
-The latest released version of Gnuastro source code is always available at
-the following URL:
+The latest released version of Gnuastro source code is always available at the 
following URL:
 
 @url{http://ftpmirror.gnu.org/gnuastro/gnuastro-latest.tar.gz}
 
 @noindent
-@ref{Quick start} describes the commands necessary to configure, build, and
-install Gnuastro on your system. This chapter will be useful in cases where
-the simple procedure above is not sufficient, for example your system lacks
-a mandatory/optional dependency (in other words, you can't pass the
-@command{$ ./configure} step), or you want greater customization, or you
-want to build and install Gnuastro from other random points in its history,
-or you want a higher level of control on the installation. Thus if you were
-happy with downloading the tarball and following @ref{Quick start}, then
-you can safely ignore this chapter and come back to it in the future if you
-need more customization.
-
-@ref{Dependencies} describes the mandatory, optional and bootstrapping
-dependencies of Gnuastro. Only the first group are required/mandatory when
-you are building Gnuastro using a tarball (see @ref{Release tarball}), they
-are very basic and low-level tools used in most astronomical software, so
-you might already have them installed, if not they are very easy to install
-as described for each. @ref{Downloading the source} discusses the two
-methods you can obtain the source code: as a tarball (a significant
-snapshot in Gnuastro's history), or the full
-history@footnote{@ref{Bootstrapping dependencies} are required if you clone
-the full history.}. The latter allows you to build Gnuastro at any random
-point in its history (for example to get bug fixes or new features that are
-not released as a tarball yet).
-
-The building and installation of Gnuastro is heavily customizable, to learn
-more about them, see @ref{Build and install}. This section is essentially a
-thorough explanation of the steps in @ref{Quick start}. It discusses ways
-you can influence the building and installation. If you encounter any
-problems in the installation process, it is probably already explained in
-@ref{Known issues}. In @ref{Other useful software} the installation and
-usage of some other free software that are not directly required by
-Gnuastro but might be useful in conjunction with it is discussed.
+@ref{Quick start} describes the commands necessary to configure, build, and 
install Gnuastro on your system.
+This chapter will be useful in cases where the simple procedure above is not sufficient; for example, your system lacks a mandatory/optional dependency (in other words, you can't pass the @command{$ ./configure} step), you want greater customization, you want to build and install Gnuastro from other points in its history, or you want a higher level of control over the installation.
+Thus if you were happy with downloading the tarball and following @ref{Quick 
start}, then you can safely ignore this chapter and come back to it in the 
future if you need more customization.
+
+@ref{Dependencies} describes the mandatory, optional and bootstrapping 
dependencies of Gnuastro.
+Only the first group is required/mandatory when you are building Gnuastro using a tarball (see @ref{Release tarball}); they are very basic and low-level tools used in most astronomical software, so you might already have them installed, and if not, they are very easy to install as described for each.
+@ref{Downloading the source} discusses the two ways you can obtain the source code: as a tarball (a significant snapshot in Gnuastro's history), or the full history@footnote{@ref{Bootstrapping dependencies} are required if you clone the full history.}.
+The latter allows you to build Gnuastro at any random point in its history 
(for example to get bug fixes or new features that are not released as a 
tarball yet).
+
+The building and installation of Gnuastro is heavily customizable; to learn more, see @ref{Build and install}.
+This section is essentially a thorough explanation of the steps in @ref{Quick 
start}.
+It discusses ways you can influence the building and installation.
+If you encounter any problems in the installation process, it is probably 
already explained in @ref{Known issues}.
+@ref{Other useful software} discusses the installation and usage of some other free software that is not directly required by Gnuastro, but might be useful in conjunction with it.
 
 
 @menu
@@ -5094,25 +3879,16 @@ Gnuastro but might be useful in conjunction with it is 
discussed.
 @node Dependencies, Downloading the source, Installation, Installation
 @section Dependencies
 
-A minimal set of dependencies are mandatory for building Gnuastro from the
-standard tarball release. If they are not present you cannot pass
-Gnuastro's configuration step. The mandatory dependencies are therefore
-very basic (low-level) tools which are easy to obtain, build and install,
-see @ref{Mandatory dependencies} for a full discussion.
-
-If you have the packages of @ref{Optional dependencies}, Gnuastro will have
-additional functionality (for example converting FITS images to JPEG or
-PDF). If you are installing from a tarball as explained in @ref{Quick
-start}, you can stop reading after this section. If you are cloning the
-version controlled source (see @ref{Version controlled source}), an
-additional bootstrapping step is required before configuration and its
-dependencies are explained in @ref{Bootstrapping dependencies}.
-
-Your operating system's package manager is an easy and convenient way to
-download and install the dependencies that are already pre-built for your
-operating system. In @ref{Dependencies from package managers}, we'll list
-some common operating system package manager commands to install the
-optional and mandatory dependencies.
+A minimal set of dependencies is mandatory for building Gnuastro from the standard tarball release.
+If they are not present, you cannot pass Gnuastro's configuration step.
+The mandatory dependencies are therefore very basic (low-level) tools which are easy to obtain, build and install; see @ref{Mandatory dependencies} for a full discussion.
+
+If you have the packages of @ref{Optional dependencies}, Gnuastro will have 
additional functionality (for example converting FITS images to JPEG or PDF).
+If you are installing from a tarball as explained in @ref{Quick start}, you 
can stop reading after this section.
+If you are cloning the version controlled source (see @ref{Version controlled source}), an additional bootstrapping step is required before configuration; its dependencies are explained in @ref{Bootstrapping dependencies}.
+
+Your operating system's package manager is an easy and convenient way to 
download and install the dependencies that are already pre-built for your 
operating system.
+In @ref{Dependencies from package managers}, we'll list some common operating 
system package manager commands to install the optional and mandatory 
dependencies.
 
 @menu
 * Mandatory dependencies::      Gnuastro will not install without these.
@@ -5126,11 +3902,9 @@ optional and mandatory dependencies.
 
 @cindex Dependencies, Gnuastro
 @cindex GNU build system
-The mandatory Gnuastro dependencies are very basic and low-level
-tools. They all follow the same basic GNU based build system (like that
-shown in @ref{Quick start}), so even if you don't have them, installing
-them should be pretty straightforward. In this section we explain each
-program and any specific note that might be necessary in the installation.
+The mandatory Gnuastro dependencies are very basic and low-level tools.
+They all follow the same basic GNU-based build system (like that shown in @ref{Quick start}), so even if you don't have them, installing them should be pretty straightforward.
+In this section we explain each program and any specific notes that might be necessary for its installation.
 
 
 @menu
@@ -5143,13 +3917,8 @@ program and any specific note that might be necessary in 
the installation.
 @subsubsection GNU Scientific library
 
 @cindex GNU Scientific Library
-The @url{http://www.gnu.org/software/gsl/, GNU Scientific Library}, or GSL,
-is a large collection of functions that are very useful in scientific
-applications, for example integration, random number generation, and Fast
-Fourier Transform among many others. To install GSL from source, you can
-run the following commands after you have downloaded
-@url{http://ftpmirror.gnu.org/gsl/gsl-latest.tar.gz,
-@file{gsl-latest.tar.gz}}:
+The @url{http://www.gnu.org/software/gsl/, GNU Scientific Library}, or GSL, is a large collection of functions that are very useful in scientific applications, for example, integration, random number generation, and the Fast Fourier Transform, among many others.
+To install GSL from source, you can run the following commands after you have 
downloaded @url{http://ftpmirror.gnu.org/gsl/gsl-latest.tar.gz, 
@file{gsl-latest.tar.gz}}:
 
 @example
 $ tar xf gsl-latest.tar.gz
@@ -5165,55 +3934,33 @@ $ sudo make install
 
 @cindex CFITSIO
 @cindex FITS standard
-@url{http://heasarc.gsfc.nasa.gov/fitsio/, CFITSIO} is the closest you can
-get to the pixels in a FITS image while remaining faithful to the
-@url{http://fits.gsfc.nasa.gov/fits_standard.html, FITS standard}. It is
-written by William Pence, the principal author of the FITS
-standard@footnote{Pence, W.D. et al. Definition of the Flexible Image
-Transport System (FITS), version 3.0. (2010) Astronomy and Astrophysics,
-Volume 524, id.A42, 40 pp.}, and is regularly updated. Setting the
-definitions for all other software packages using FITS images.
+@url{http://heasarc.gsfc.nasa.gov/fitsio/, CFITSIO} is the closest you can get 
to the pixels in a FITS image while remaining faithful to the 
@url{http://fits.gsfc.nasa.gov/fits_standard.html, FITS standard}.
+It is written by William Pence, the principal author of the FITS standard@footnote{Pence, W.D. et al. Definition of the Flexible Image Transport System (FITS), version 3.0. (2010) Astronomy and Astrophysics, Volume 524, id.A42, 40 pp.}, and is regularly updated, setting the definitions for all other software packages using FITS images.
 
 @vindex --enable-reentrant
 @cindex Reentrancy, multiple file opening
 @cindex Multiple file opening, reentrancy
-Some GNU/Linux distributions have CFITSIO in their package managers, if it
-is available and updated, you can use it. One problem that might occur is
-that CFITSIO might not be configured with the @option{--enable-reentrant}
-option by the distribution. This option allows CFITSIO to open a file in
-multiple threads, it can thus provide great speed improvements. If CFITSIO
-was not configured with this option, any program which needs this
-capability will warn you and abort when you ask for multiple threads (see
-@ref{Multi-threaded operations}).
-
-To install CFITSIO from source, we strongly recommend that you have a look
-through Chapter 2 (Creating the CFITSIO library) of the CFITSIO manual and
-understand the options you can pass to @command{$ ./configure} (they aren't
-too much). This is a very basic package for most astronomical software and
-it is best that you configure it nicely with your system. Once you download
-the source and unpack it, the following configure script should be enough
-for most purposes. Don't forget to read chapter two of the manual though,
-for example the second option is only for 64bit systems. The manual also
-explains how to check if it has been installed correctly.
-
-CFITSIO comes with two executable files called @command{fpack} and
-@command{funpack}. From their manual: they ``are standalone programs for
-compressing and uncompressing images and tables that are stored in the FITS
-(Flexible Image Transport System) data format. They are analogous to the
-gzip and gunzip compression programs except that they are optimized for the
-types of astronomical images that are often stored in FITS format''. The
-commands below will compile and install them on your system along with
-CFITSIO. They are not essential for Gnuastro, since they are just wrappers
-for functions within CFITSIO, but they can come in handy. The @command{make
-utils} command is only available for versions above 3.39, it will build
-these executable files along with several other executable test files which
-are deleted in the following commands before the installation (otherwise
-the test files will also be installed).
-
-The commands necessary to decompress, build and install CFITSIO from source
-are described below. Let's assume you have downloaded
-@url{http://heasarc.gsfc.nasa.gov/FTP/software/fitsio/c/cfitsio_latest.tar.gz,
-@file{cfitsio_latest.tar.gz}} and are in the same directory:
+Some GNU/Linux distributions have CFITSIO in their package managers; if it is available and updated, you can use it.
+One problem that might occur is that CFITSIO might not be configured with the @option{--enable-reentrant} option by the distribution.
+This option allows CFITSIO to open a file in multiple threads; it can thus provide great speed improvements.
+If CFITSIO was not configured with this option, any program which needs this 
capability will warn you and abort when you ask for multiple threads (see 
@ref{Multi-threaded operations}).
+
+To install CFITSIO from source, we strongly recommend that you have a look through Chapter 2 (Creating the CFITSIO library) of the CFITSIO manual and understand the options you can pass to @command{$ ./configure} (there aren't too many).
+This is a very basic package for most astronomical software and it is best 
that you configure it nicely with your system.
+Once you download the source and unpack it, the following configure script 
should be enough for most purposes.
+Don't forget to read chapter two of the manual though; for example, the second option is only for 64-bit systems.
+The manual also explains how to check if it has been installed correctly.
+
+CFITSIO comes with two executable files called @command{fpack} and 
@command{funpack}.
+From their manual: they ``are standalone programs for compressing and 
uncompressing images and tables that are stored in the FITS (Flexible Image 
Transport System) data format.
+They are analogous to the gzip and gunzip compression programs except that 
they are optimized for the types of astronomical images that are often stored 
in FITS format''.
+The commands below will compile and install them on your system along with 
CFITSIO.
+They are not essential for Gnuastro, since they are just wrappers for 
functions within CFITSIO, but they can come in handy.
+The @command{make utils} command is only available for versions above 3.39; it will build these executable files along with several other executable test files, which are deleted in the following commands before the installation (otherwise the test files will also be installed).
+
+The commands necessary to decompress, build and install CFITSIO from source 
are described below.
+Let's assume you have downloaded 
@url{http://heasarc.gsfc.nasa.gov/FTP/software/fitsio/c/cfitsio_latest.tar.gz, 
@file{cfitsio_latest.tar.gz}} and are in the same directory:
 
 @example
 $ tar xf cfitsio_latest.tar.gz
@@ -5238,40 +3985,23 @@ $ sudo make install
 @cindex WCS
 @cindex WCSLIB
 @cindex World Coordinate System
-@url{http://www.atnf.csiro.au/people/mcalabre/WCS/, WCSLIB} is written and
-maintained by one of the authors of the World Coordinate System (WCS)
-definition in the @url{http://fits.gsfc.nasa.gov/fits_standard.html, FITS
-standard}@footnote{Greisen E.W., Calabretta M.R. (2002) Representation of
-world coordinates in FITS. Astronomy and Astrophysics, 395, 1061-1075.},
-Mark Calabretta. It might be already built and ready in your distribution's
-package management system. However, here the installation from source is
-explained, for the advantages of installation from source please see
-@ref{Mandatory dependencies}. To install WCSLIB you will need to have
-CFITSIO already installed, see @ref{CFITSIO}.
+@url{http://www.atnf.csiro.au/people/mcalabre/WCS/, WCSLIB} is written and maintained by Mark Calabretta, one of the authors of the World Coordinate System (WCS) definition in the @url{http://fits.gsfc.nasa.gov/fits_standard.html, FITS standard}@footnote{Greisen E.W., Calabretta M.R. (2002) Representation of world coordinates in FITS. Astronomy and Astrophysics, 395, 1061-1075.}.
+It might already be built and ready in your distribution's package management system.
+However, here the installation from source is explained; for the advantages of installation from source, please see @ref{Mandatory dependencies}.
+To install WCSLIB you will need to have CFITSIO already installed; see @ref{CFITSIO}.
 
 @vindex --without-pgplot
-WCSLIB also has plotting capabilities which use PGPLOT (a plotting library
-for C). If you wan to use those capabilities in WCSLIB, @ref{PGPLOT}
-provides the PGPLOT installation instructions. However PGPLOT is
-old@footnote{As of early June 2016, its most recent version was uploaded in
-February 2001.}, so its installation is not easy, there are also many great
-modern WCS plotting tools (mostly in written in Python). Hence, if you will
-not be using those plotting functions in WCSLIB, you can configure it with
-the @option{--without-pgplot} option as shown below.
-
-If you have the cURL library @footnote{@url{https://curl.haxx.se}} on your
-system and you installed CFITSIO version 3.42 or later, you will need to
-also link with the cURL library at configure time (through the
-@code{-lcurl} option as shown below). CFITSIO uses the cURL library for its
-HTTPS (or HTTP Secure@footnote{@url{https://en.wikipedia.org/wiki/HTTPS}})
-support and if it is present on your system, CFITSIO will depend on
-it. Therefore, if @command{./configure} command below fails (you don't have
-the cURL library), then remove this option and rerun it.
-
-Let's assume you have downloaded
-@url{ftp://ftp.atnf.csiro.au/pub/software/wcslib/wcslib.tar.bz2,
-@file{wcslib.tar.bz2}} and are in the same directory, to configure, build,
-check and install WCSLIB follow the steps below.
+WCSLIB also has plotting capabilities which use PGPLOT (a plotting library for 
C).
+If you want to use those capabilities in WCSLIB, @ref{PGPLOT} provides the PGPLOT installation instructions.
+However, PGPLOT is old@footnote{As of early June 2016, its most recent version was uploaded in February 2001.}, so its installation is not easy; there are also many great modern WCS plotting tools (mostly written in Python).
+Hence, if you will not be using those plotting functions in WCSLIB, you can 
configure it with the @option{--without-pgplot} option as shown below.
+
+If you have the cURL library@footnote{@url{https://curl.haxx.se}} on your system and you installed CFITSIO version 3.42 or later, you will need to also link with the cURL library at configure time (through the @code{-lcurl} option as shown below).
+CFITSIO uses the cURL library for its HTTPS (or HTTP 
Secure@footnote{@url{https://en.wikipedia.org/wiki/HTTPS}}) support and if it 
is present on your system, CFITSIO will depend on it.
+Therefore, if the @command{./configure} command below fails (meaning you don't have the cURL library), then remove this option and rerun it.
+
+Let's assume you have downloaded @url{ftp://ftp.atnf.csiro.au/pub/software/wcslib/wcslib.tar.bz2, @file{wcslib.tar.bz2}} and are in the same directory; to configure, build, check and install WCSLIB, follow the steps below.
 @example
 $ tar xf wcslib.tar.bz2
 
@@ -5291,102 +4021,68 @@ $ sudo make install
 @node Optional dependencies, Bootstrapping dependencies, Mandatory 
dependencies, Dependencies
 @subsection Optional dependencies
 
-The libraries listed here are only used for very specific applications,
-therefore if you don't want these operations, Gnuastro will be built and
-installed without them and you don't have to have the dependencies.
+The libraries listed here are only used for very specific applications; therefore, if you don't need those operations, Gnuastro will be built and installed without them, and you don't have to have these dependencies.
 
 @cindex GPL Ghostscript
-If the @command{./configure} script can't find these requirements, it will
-warn you in the end that they are not present and notify you of the
-operation(s) you can't do due to not having them. If the output you request
-from a program requires a missing library, that program is going to warn
-you and abort. In the case of program dependencies (like GPL GhostScript),
-if you install them at a later time, the program will run. This is because
-if required libraries are not present at build time, the executables cannot
-be built, but an executable is called by the built program at run time so
-if it becomes available, it will be used. If you do install an optional
-library later, you will have to rebuild Gnuastro and reinstall it for it to
-take effect.
+If the @command{./configure} script can't find these requirements, it will warn you at the end that they are not present and notify you of the operation(s) you can't do because of not having them.
+If the output you request from a program requires a missing library, that 
program is going to warn you and abort.
+In the case of program dependencies (like GPL Ghostscript), if you install them at a later time, the program will run.
+This is because a missing library at build time means the relevant executables cannot be built at all, but a missing program is only called by the built program at run time, so if it becomes available later, it will be used.
+If you do install an optional library later, you will have to rebuild Gnuastro 
and reinstall it for it to take effect.
 
 @table @asis
 
 @item GNU Libtool
 @cindex GNU Libtool
-Libtool is a program to simplify managing of the libraries to build an
-executable (a program). GNU Libtool has some added functionality compared
-to other implementations. If GNU Libtool isn't present on your system at
-configuration time, a warning will be printed and @ref{BuildProgram} won't
-be built or installed. The configure script will look into your search path
-(@code{PATH}) for GNU Libtool through the following executable names:
-@command{libtool} (acceptable only if it is the GNU implementation) or
-@command{glibtool}. See @ref{Installation directory} for more on
-@code{PATH}.
-
-GNU Libtool (the binary/executable file) is a low-level program that is
-probably already present on your system, and if not, is available in your
-operating system package manager@footnote{Note that we want the
-binary/executable Libtool program which can be run on the command-line. In
-Debian-based operating systems which separate various parts of a package,
-you want want @code{libtool-bin}, the @code{libtool} package won't contain
-the executable program.}. If you want to install GNU Libtool's latest
-version from source, please visit its
-@url{https://www.gnu.org/software/libtool/, webpage}.
-
-Gnuastro's tarball is shipped with an internal implementation of GNU
-Libtool. Even if you have GNU Libtool, Gnuastro's internal implementation
-is used for the building and installation of Gnuastro. As a result, you can
-still build, install and use Gnuastro even if you don't have GNU Libtool
-installed on your system. However, this internal Libtool does not get
-installed. Therefore, after Gnuastro's installation, if you want to use
-@ref{BuildProgram} to compile and link your own C source code which uses
-the @ref{Gnuastro library}, you need to have GNU Libtool available on your
-system (independent of Gnuastro). See @ref{Review of library fundamentals}
-to learn more about libraries.
+Libtool is a program that simplifies the management of the libraries used to build an executable (a program).
+GNU Libtool has some added functionality compared to other implementations.
+If GNU Libtool isn't present on your system at configuration time, a warning 
will be printed and @ref{BuildProgram} won't be built or installed.
+The configure script will look into your search path (@code{PATH}) for GNU 
Libtool through the following executable names: @command{libtool} (acceptable 
only if it is the GNU implementation) or @command{glibtool}.
+See @ref{Installation directory} for more on @code{PATH}.
+
+GNU Libtool (the binary/executable file) is a low-level program that is 
probably already present on your system, and if not, is available in your 
operating system package manager@footnote{Note that we want the 
binary/executable Libtool program which can be run on the command-line.
+In Debian-based operating systems which separate various parts of a package, you will want @code{libtool-bin}; the @code{libtool} package won't contain the executable program.}.
+If you want to install GNU Libtool's latest version from source, please visit 
its @url{https://www.gnu.org/software/libtool/, webpage}.
+
+Gnuastro's tarball is shipped with an internal implementation of GNU Libtool.
+Even if you have GNU Libtool, Gnuastro's internal implementation is used for 
the building and installation of Gnuastro.
+As a result, you can still build, install and use Gnuastro even if you don't 
have GNU Libtool installed on your system.
+However, this internal Libtool does not get installed.
+Therefore, after Gnuastro's installation, if you want to use 
@ref{BuildProgram} to compile and link your own C source code which uses the 
@ref{Gnuastro library}, you need to have GNU Libtool available on your system 
(independent of Gnuastro).
+See @ref{Review of library fundamentals} to learn more about libraries.
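+
+To quickly check whether the Libtool on your @code{PATH} is the GNU implementation, you can inspect its version output; the GNU implementation identifies itself on the first line (a sketch; on some systems the GNU executable is instead called @command{glibtool}):
+
+@example
+$ libtool --version
+@end example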
 
 @item libgit2
 @cindex Git
-@cindex libgit2
+@pindex libgit2
 @cindex Version control systems
-Git is one of the most common version control systems (see @ref{Version
-controlled source}). When @file{libgit2} is present, and Gnuastro's
-programs are run within a version controlled directory, outputs will
-contain the version number of the working directory's repository for future
-reproducibility. See the @command{COMMIT} keyword header in @ref{Output
-FITS files} for a discussion.
+Git is one of the most common version control systems (see @ref{Version 
controlled source}).
+When @file{libgit2} is present, and Gnuastro's programs are run within a 
version controlled directory, outputs will contain the version number of the 
working directory's repository for future reproducibility.
+See the @command{COMMIT} keyword header in @ref{Output FITS files} for a 
discussion.
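+
+For example, once Gnuastro is built with @file{libgit2} and a program is run within a version controlled directory, you could check for this keyword with a command like the following (a sketch; the file name @file{out.fits} and the HDU number are hypothetical):
+
+@example
+$ astfits out.fits -h1 | grep COMMIT
+@end example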
 
 @item libjpeg
 @pindex libjpeg
 @cindex JPEG format
-libjpeg is only used by ConvertType to read from and write to JPEG images,
-see @ref{Recognized file formats}. @url{http://www.ijg.org/, libjpeg} is a
-very basic library that provides tools to read and write JPEG images, most
-Unix-like graphic programs and libraries use it. Therefore you most
-probably already have it installed.
-@url{http://libjpeg-turbo.virtualgl.org/, libjpeg-turbo} is an alternative
-to libjpeg. It uses Single instruction, multiple data (SIMD) instructions
-for ARM based systems that significantly decreases the processing time of
-JPEG compression and decompression algorithms.
+libjpeg is only used by ConvertType to read from and write to JPEG images; see @ref{Recognized file formats}.
+@url{http://www.ijg.org/, libjpeg} is a very basic library that provides tools to read and write JPEG images; most Unix-like graphic programs and libraries use it.
+Therefore you most probably already have it installed.
+@url{http://libjpeg-turbo.virtualgl.org/, libjpeg-turbo} is an alternative to libjpeg.
+It uses Single Instruction, Multiple Data (SIMD) instructions for ARM-based systems that significantly decrease the processing time of JPEG compression and decompression algorithms.
 
 @item libtiff
 @pindex libtiff
 @cindex TIFF format
-libtiff is used by ConvertType and the libraries to read TIFF images, see
-@ref{Recognized file formats}. @url{http://www.simplesystems.org/libtiff/,
-libtiff} is a very basic library that provides tools to read and write TIFF
-images, most Unix-like operating system graphic programs and libraries use
-it. Therefore even if you don't have it installed, it must be easily
-available in your package manager.
+libtiff is used by ConvertType and the libraries to read TIFF images; see @ref{Recognized file formats}.
+@url{http://www.simplesystems.org/libtiff/, libtiff} is a very basic library that provides tools to read and write TIFF images; most Unix-like operating system graphic programs and libraries use it.
+Therefore even if you don't have it installed, it should be easily available in your package manager.
 
 
 @item GPL Ghostscript
 @cindex GPL Ghostscript
-GPL Ghostscript's executable (@command{gs}) is called by ConvertType to
-compile a PDF file from a source PostScript file, see
-@ref{ConvertType}. Therefore its headers (and libraries) are not
-needed. With a very high probability you already have it in your GNU/Linux
-distribution. Unfortunately it does not follow the standard GNU build style
-so installing it is very hard. It is best to rely on your distribution's
-package managers for this.
+GPL Ghostscript's executable (@command{gs}) is called by ConvertType to compile a PDF file from a source PostScript file; see @ref{ConvertType}.
+Therefore its headers (and libraries) are not needed.
+You most probably already have it in your GNU/Linux distribution.
+Unfortunately it does not follow the standard GNU build style, so installing it is very hard.
+It is best to rely on your distribution's package manager for this.
 
 @end table
 
@@ -5396,26 +4092,15 @@ package managers for this.
 @node Bootstrapping dependencies, Dependencies from package managers, Optional 
dependencies, Dependencies
 @subsection Bootstrapping dependencies
 
-Bootstrapping is only necessary if you have decided to obtain the full
-version controlled history of Gnuastro, see @ref{Version controlled source}
-and @ref{Bootstrapping}. Using the version controlled source enables you to
-always be up to date with the most recent development work of Gnuastro (bug
-fixes, new functionalities, improved algorithms and etc). If you have
-downloaded a tarball (see @ref{Downloading the source}), then you can
-ignore this subsection.
-
-To successfully run the bootstrapping process, there are some additional
-dependencies to those discussed in the previous subsections. These are low
-level tools that are used by a large collection of Unix-like operating
-systems programs, therefore they are most probably already available in
-your system. If they are not already installed, you should be able to
-easily find them in any GNU/Linux distribution package management system
-(@command{apt-get}, @command{yum}, @command{pacman} and etc). The short
-names in parenthesis in @command{typewriter} font after the package name
-can be used to search for them in your package manager. For the GNU
-Portability Library, GNU Autoconf Archive and @TeX{} Live, it is
-recommended to use the instructions here, not your operating system's
-package manager.
+Bootstrapping is only necessary if you have decided to obtain the full version controlled history of Gnuastro; see @ref{Version controlled source} and @ref{Bootstrapping}.
+Using the version controlled source enables you to always be up to date with 
the most recent development work of Gnuastro (bug fixes, new functionalities, 
improved algorithms, etc).
+If you have downloaded a tarball (see @ref{Downloading the source}), then you 
can ignore this subsection.
+
+To successfully run the bootstrapping process, there are some additional dependencies beyond those discussed in the previous subsections.
+These are low-level tools that are used by a large collection of Unix-like operating system programs, therefore they are most probably already available on your system.
+If they are not already installed, you should be able to easily find them in any GNU/Linux distribution package management system (@command{apt-get}, @command{yum}, @command{pacman}, etc).
+The short names in parentheses in @command{typewriter} font after the package name can be used to search for them in your package manager.
+For the GNU Portability Library, GNU Autoconf Archive and @TeX{} Live, it is 
recommended to use the instructions here, not your operating system's package 
manager.
 
 @table @asis
 
@@ -5423,34 +4108,17 @@ package manager.
 @cindex GNU C library
 @cindex Gnulib: GNU Portability Library
 @cindex GNU Portability Library (Gnulib)
-To ensure portability for a wider range of operating systems (those that
-don't include GNU C library, namely glibc), Gnuastro depends on the GNU
-portability library, or Gnulib. Gnulib keeps a copy of all the functions in
-glibc, implemented (as much as possible) to be portable to other operating
-systems. The @file{bootstrap} script can automatically clone Gnulib (as a
-@file{gnulib/} directory inside Gnuastro), however, as described in
-@ref{Bootstrapping} this is not recommended.
-
-The recommended way to bootstrap Gnuastro is to first clone Gnulib and the
-Autoconf archives (see below) into a local directory outside of
-Gnuastro. Let's call it @file{DEVDIR}@footnote{If you are not a developer
-in Gnulib or Autoconf archives, @file{DEVDIR} can be a directory that you
-don't backup. In this way the large number of files in these projects won't
-slow down your backup process or take bandwidth (if you backup to a remote
-server).}  (which you can set to any directory). Currently in Gnuastro,
-both Gnulib and Autoconf archives have to be cloned in the same top
-directory@footnote{If you already have the Autoconf archives in a separate
-directory, or can't clone it in the same directory as Gnulib, or you have
-it with another directory name (not @file{autoconf-archive/}), you can
-follow this short step. Set @file{AUTOCONFARCHIVES} to your desired
-address. Then define a symbolic link in @file{DEVDIR} with the following
-command so Gnuastro's bootstrap script can find it:@*@command{$ ln -s
-$AUTOCONFARCHIVES $DEVDIR/autoconf-archive}.} like the case
-here@footnote{If your internet connection is active, but Git complains
-about the network, it might be due to your network setup not recognizing
-the git protocol. In that case use the following URL for the HTTP protocol
-instead (for Autoconf archives, replace the name):
-@command{http://git.sv.gnu.org/r/gnulib.git}}:
+To ensure portability for a wider range of operating systems (those that don't include the GNU C Library, namely glibc), Gnuastro depends on the GNU Portability Library, or Gnulib.
+Gnulib keeps a copy of all the functions in glibc, implemented (as much as possible) to be portable to other operating systems.
+The @file{bootstrap} script can automatically clone Gnulib (as a @file{gnulib/} directory inside Gnuastro); however, as described in @ref{Bootstrapping}, this is not recommended.
+
+The recommended way to bootstrap Gnuastro is to first clone Gnulib and the 
Autoconf archives (see below) into a local directory outside of Gnuastro.
+Let's call it @file{DEVDIR}@footnote{If you are not a developer in Gnulib or Autoconf archives, @file{DEVDIR} can be a directory that you don't back up.
+In this way the large number of files in these projects won't slow down your backup process or take bandwidth (if you back up to a remote server).} (which you can set to any directory).
+Currently in Gnuastro, both Gnulib and Autoconf archives have to be cloned in 
the same top directory@footnote{If you already have the Autoconf archives in a 
separate directory, or can't clone it in the same directory as Gnulib, or you 
have it with another directory name (not @file{autoconf-archive/}), you can 
follow this short step.
+Set @file{AUTOCONFARCHIVES} to your desired address.
+Then define a symbolic link in @file{DEVDIR} with the following command so 
Gnuastro's bootstrap script can find it:@*@command{$ ln -s $AUTOCONFARCHIVES 
$DEVDIR/autoconf-archive}.} like the case here@footnote{If your internet 
connection is active, but Git complains about the network, it might be due to 
your network setup not recognizing the git protocol.
+In that case use the following URL for the HTTP protocol instead (for Autoconf 
archives, replace the name): @command{http://git.sv.gnu.org/r/gnulib.git}}:
 
 @example
 $ DEVDIR=/home/yourname/Development
@@ -5460,35 +4128,28 @@ $ git clone git://git.sv.gnu.org/autoconf-archive.git
 @end example
 
 @noindent
-You now have the full version controlled source of these two repositories
-in separate directories. Both these packages are regularly updated, so
-every once in a while, you can run @command{$ git pull} within them to get
-any possible updates.
+You now have the full version controlled source of these two repositories in 
separate directories.
+Both these packages are regularly updated, so every once in a while, you can 
run @command{$ git pull} within them to get any possible updates.
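+
+For example (using the @file{DEVDIR} defined above):
+
+@example
+$ cd $DEVDIR/gnulib           && git pull
+$ cd $DEVDIR/autoconf-archive && git pull
+@end example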
 
 @item GNU Automake (@command{automake})
 @cindex GNU Automake
-GNU Automake will build the @file{Makefile.in} files in each sub-directory
-using the (hand-written) @file{Makefile.am} files. The @file{Makefile.in}s
-are subsequently used to generate the @file{Makefile}s when the user runs
-@command{./configure} before building.
+GNU Automake will build the @file{Makefile.in} files in each sub-directory 
using the (hand-written) @file{Makefile.am} files.
+The @file{Makefile.in}s are subsequently used to generate the @file{Makefile}s 
when the user runs @command{./configure} before building.
 
 @item GNU Autoconf (@command{autoconf})
 @cindex GNU Autoconf
-GNU Autoconf will build the @file{configure} script using the
-configurations we have defined (hand-written) in @file{configure.ac}.
+GNU Autoconf will build the @file{configure} script using the configurations 
we have defined (hand-written) in @file{configure.ac}.
 
 @item GNU Autoconf Archive
 @cindex GNU Autoconf Archive
-These are a large collection of tests that can be called to run at
-@command{./configure} time. See the explanation under GNU Portability
-Library above for instructions on obtaining it and keeping it up to date.
+These are a large collection of tests that can be called to run at 
@command{./configure} time.
+See the explanation under GNU Portability Library above for instructions on 
obtaining it and keeping it up to date.
 
 @item GNU Libtool (@command{libtool})
 @cindex GNU Libtool
-GNU Libtool is in charge of building all the libraries in Gnuastro. The
-libraries contain functions that are used by more than one program and are
-installed for use in other programs. They are thus put in a separate
-directory (@file{lib/}).
+GNU Libtool is in charge of building all the libraries in Gnuastro.
+The libraries contain functions that are used by more than one program and are 
installed for use in other programs.
+They are thus put in a separate directory (@file{lib/}).
 
 @item GNU help2man (@command{help2man})
 @cindex GNU help2man
@@ -5498,36 +4159,20 @@ GNU help2man is used to convert the output of the 
@option{--help} option
 @item @LaTeX{} and some @TeX{} packages
 @cindex @LaTeX{}
 @cindex @TeX{} Live
-Some of the figures in this book are built by @LaTeX{} (using the PGF/TikZ
-package). The @LaTeX{} source for those figures is version controlled for
-easy maintenance not the actual figures. So the @file{./boostrap} script
-will run @LaTeX{} to build the figures. The best way to install @LaTeX{}
-and all the necessary packages is through
-@url{https://www.tug.org/texlive/, @TeX{} live} which is a package manager
-for @TeX{} related tools that is independent of any operating system. It is
-thus preferred to the @TeX{} Live versions distributed by your operating
-system.
-
-To install @TeX{} Live, go to the webpage and download the appropriate
-installer by following the ``download'' link. Note that by default the full
-package repository will be downloaded and installed (around 4 Giga Bytes)
-which can take @emph{very} long to download and to update later. However,
-most packages are not needed by everyone, it is easier, faster and better
-to install only the ``Basic scheme'' (consisting of only the most basic
-@TeX{} and @LaTeX{} packages, which is less than 200 Mega
-bytes)@footnote{You can also download the DVD iso file at a later time to
-keep as a backup for when you don't have internet connection if you need a
-package.}.
-
-After the installation, be sure to set the environment variables as
-suggested in the end of the outputs. Any time you confront (need) a package
-you don't have, simply install it with a command like below (similar to how
-you install software from your operating system's package
-manager)@footnote{After running @TeX{}, or @LaTeX{}, you might get a
-warning complaining about a @file{missingfile}. Run `@command{tlmgr info
-missingfile}' to see the package(s) containing that file which you can
-install.}. To install all the necessary @TeX{} packages for a successful
-Gnuastro bootstrap, run this command:
+Some of the figures in this book are built by @LaTeX{} (using the PGF/TikZ 
package).
+The @LaTeX{} source for those figures is version controlled for easy maintenance, not the actual figures.
+So the @file{./bootstrap} script will run @LaTeX{} to build the figures.
+The best way to install @LaTeX{} and all the necessary packages is through @url{https://www.tug.org/texlive/, @TeX{} Live}, which is a package manager for @TeX{}-related tools that is independent of any operating system.
+It is thus preferred to the @TeX{} Live versions distributed by your operating system.
+
+To install @TeX{} Live, go to the webpage and download the appropriate 
installer by following the ``download'' link.
+Note that by default the full package repository will be downloaded and installed (around 4 gigabytes), which can take @emph{very} long to download and to update later.
+However, most packages are not needed by everyone; it is easier, faster and better to install only the ``Basic scheme'' (consisting of only the most basic @TeX{} and @LaTeX{} packages, which is less than 200 megabytes)@footnote{You can also download the DVD iso file at a later time to keep as a backup for when you need a package but don't have an internet connection.}.
+
+After the installation, be sure to set the environment variables as suggested at the end of its output.
+Any time you need a package you don't have, simply install it with a command like the one below (similar to how you install software from your operating system's package manager)@footnote{After running @TeX{}, or @LaTeX{}, you might get a warning complaining about a @file{missingfile}.
+Run `@command{tlmgr info missingfile}' to see the package(s) containing that file which you can install.}.
+To install all the necessary @TeX{} packages for a successful Gnuastro 
bootstrap, run this command:
 
 @example
 $ su
@@ -5538,9 +4183,8 @@ $ su
 
 @item ImageMagick (@command{imagemagick})
 @cindex ImageMagick
-ImageMagick is a wonderful and robust program for image manipulation on the
-command-line. @file{bootsrap} uses it to convert the book images into the
-formats necessary for the various book formats.
+ImageMagick is a wonderful and robust program for image manipulation on the 
command-line.
+@file{bootstrap} uses it to convert the book images into the formats necessary 
for the various book formats.
 
 @end table
 
@@ -5555,13 +4199,11 @@ formats necessary for the various book formats.
 @cindex Compiling from source
 @cindex Source code compilation
 @cindex Distributions, GNU/Linux
-The most basic way to install a package on your system is to build the
-packages from source yourself. Alternatively, you can use your operating
-system's package manager to download pre-compiled files and install
-them. The latter choice is easier and faster. However, we recommend that
-you build the @ref{Mandatory dependencies} yourself from source (all
-necessary commands and links are given in the respective section). Here are
-some basic reasons behind this recommendation.
+The most basic way to install a package on your system is to build the package from source yourself.
+Alternatively, you can use your operating system's package manager to download 
pre-compiled files and install them.
+The latter choice is easier and faster.
+However, we recommend that you build the @ref{Mandatory dependencies} yourself 
from source (all necessary commands and links are given in the respective 
section).
+Here are some basic reasons behind this recommendation.
 
 @enumerate
 
@@ -5570,46 +4212,32 @@ Your distribution's pre-built package might not be the 
most recent
 release.
 
 @item
-For each package, Gnuastro might preform better (or require) certain
-configuration options that your distribution's package managers didn't add
-for you. If present, these configuration options are explained during the
-installation of each in the sections below (for example in
-@ref{CFITSIO}). When the proper configuration has not been set, the
-programs should complain and inform you.
+For each package, Gnuastro might perform better with (or require) certain configuration options that your distribution's package managers didn't add for you.
+If present, these configuration options are explained during the installation 
of each in the sections below (for example in @ref{CFITSIO}).
+When the proper configuration has not been set, the programs should complain 
and inform you.
 
 @item
-For the libraries, they might separate the binary file from the header
-files which can cause confusion, see @ref{Known issues}.
+For the libraries, the distribution might separate the binary file from the header files, which can cause confusion; see @ref{Known issues}.
 
 @item
-Like any other tool, the science you derive from Gnuastro's tools highly
-depend on these lower level dependencies, so generally it is much better to
-have a close connection with them. By reading their manuals, installing
-them and staying up to date with changes/bugs in them, your scientific
-results and understanding (of what is going on, and thus how you interpret
-your scientific results) will also correspondingly improve.
+Like any other tool, the science you derive from Gnuastro's tools highly depends on these lower-level dependencies, so generally it is much better to have a close connection with them.
+By reading their manuals, installing them and staying up to date with 
changes/bugs in them, your scientific results and understanding (of what is 
going on, and thus how you interpret your scientific results) will also 
correspondingly improve.
 @end enumerate
 
-Based on your package manager, you can use any of the following commands to
-install the mandatory and optional dependencies. If your package manager
-isn't included in the list below, please send us the respective command, so
-we add it. Gnuastro itself if also already packaged in some package
-managers (for example Debian or Homebrew).
+Based on your package manager, you can use any of the following commands to 
install the mandatory and optional dependencies.
+If your package manager isn't included in the list below, please send us the respective command so we can add it.
+Gnuastro itself is also already packaged in some package managers (for example Debian or Homebrew).
 
-As discussed above, we recommend installing the @emph{mandatory}
-dependencies manually from source (see @ref{Mandatory
-dependencies}). Therefore, in each command below, first the optional
-dependencies are given. The mandatory dependencies are included after an
-empty line. If you would also like to install the mandatory dependencies
-with your package manager, just ignore the empty line.
+As discussed above, we recommend installing the @emph{mandatory} dependencies 
manually from source (see @ref{Mandatory dependencies}).
+Therefore, in each command below, first the optional dependencies are given.
+The mandatory dependencies are included after an empty line.
+If you would also like to install the mandatory dependencies with your package 
manager, just ignore the empty line.
 
-For better archivability and compression ratios, Gnuastro's recommended
-tarball compression format is with the
-@url{http://lzip.nongnu.org/lzip.html, Lzip} program, see @ref{Release
-tarball}. Therefore, the package manager commands below also contain Lzip.
+For better archivability and compression ratios, Gnuastro's recommended tarball compression format is that of the @url{http://lzip.nongnu.org/lzip.html, Lzip} program; see @ref{Release tarball}.
+Therefore, the package manager commands below also contain Lzip.
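+
+For example, to unpack a Lzip-compressed tarball (a sketch; the tarball name is only illustrative, and a recent GNU Tar will call @command{lzip} automatically when it is installed):
+
+@example
+$ tar xf gnuastro-latest.tar.lz
+@end example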
 
 @table @asis
-@item @command{apt-get} (Debian-based OSs: Debian, Ubuntu, Linux Mint, and etc)
+@item @command{apt-get} (Debian-based OSs: Debian, Ubuntu, Linux Mint, etc)
 @cindex Debian
 @cindex Ubuntu
 @cindex Linux Mint
@@ -5617,14 +4245,12 @@ tarball}. Therefore, the package manager commands below 
also contain Lzip.
 @cindex Advanced Packaging Tool (APT, Debian)
 @url{https://en.wikipedia.org/wiki/Debian,Debian} is one of the oldest
 GNU/Linux
-distributions@footnote{@url{https://en.wikipedia.org/wiki/List_of_Linux_distributions#Debian-based}}.
 It
-thus has a very extended user community and a robust internal structure and
-standards. All of it is free software and based on the work of volunteers
-around the world. Many distributions are thus derived from it, for example
-Ubuntu and Linux Mint. This arguably makes Debian-based OSs the largest,
-and most used, class of GNU/Linux distributions. All of them use Debian's
-Advanced Packaging Tool (APT, for example @command{apt-get}) for managing
-packages.
+distributions@footnote{@url{https://en.wikipedia.org/wiki/List_of_Linux_distributions#Debian-based}}.
+It thus has a very extensive user community and robust internal structure and standards.
+All of it is free software and based on the work of volunteers around the 
world.
+Many distributions are thus derived from it, for example Ubuntu and Linux Mint.
+This arguably makes Debian-based OSs the largest, and most used, class of 
GNU/Linux distributions.
+All of them use Debian's Advanced Packaging Tool (APT, for example 
@command{apt-get}) for managing packages.
 @example
 $ sudo apt-get install ghostscript libtool-bin libjpeg-dev  \
                        libtiff-dev libgit2-dev lzip         \
@@ -5633,13 +4259,11 @@ $ sudo apt-get install ghostscript libtool-bin 
libjpeg-dev  \
 @end example
 
 @noindent
-Gnuastro is @url{https://tracker.debian.org/pkg/gnuastro,packaged} in
-Debian (and thus some of its derivate operating systems). Just make sure it
-is the most recent version.
-
+Gnuastro is @url{https://tracker.debian.org/pkg/gnuastro,packaged} in Debian (and thus some of its derivative operating systems).
+Just make sure it is the most recent version.
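+
+For example, you can check the version that your distribution currently packages (and compare it with the latest release on @url{http://ftp.gnu.org/gnu/gnuastro}) with APT's query command:
+
+@example
+$ apt-cache policy gnuastro
+@end example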
 
 @item @command{dnf}
-@itemx @command{yum} (Red Hat-based OSs: Red Hat, Fedora, CentOS, Scientific 
Linux, and etc)
+@itemx @command{yum} (Red Hat-based OSs: Red Hat, Fedora, CentOS, Scientific 
Linux, etc)
 @cindex RHEL
 @cindex Fedora
 @cindex CentOS
@@ -5647,14 +4271,11 @@ is the most recent version.
 @cindex @command{dnf}
 @cindex @command{yum}
 @cindex Scientific Linux
-@url{https://en.wikipedia.org/wiki/Red_Hat,Red Hat Enterprise Linux} (RHEL)
-is released by Red Hat Inc. RHEL requires paid subscriptions for use of its
-binaries and support. But since it is free software, many other teams use
-its code to spin-off their own distributions based on RHEL. Red Hat-based
-GNU/Linux distributions initially used the ``Yellowdog Updated, Modifier''
-(YUM) package manager, which has been replaced by ``Dandified yum''
-(DNF). If the latter isn't available on your system, you can use
-@command{yum} instead of @command{dnf} in the command below.
+@url{https://en.wikipedia.org/wiki/Red_Hat,Red Hat Enterprise Linux} (RHEL) is 
released by Red Hat Inc.
+RHEL requires paid subscriptions for use of its binaries and support.
+But since it is free software, many other teams use its code to spin off their own distributions based on RHEL.
+Red Hat-based GNU/Linux distributions initially used the ``Yellowdog Updater, Modified'' (YUM) package manager, which has been replaced by ``Dandified yum'' (DNF).
+If the latter isn't available on your system, you can use @command{yum} 
instead of @command{dnf} in the command below.
 @example
 $ sudo dnf install ghostscript libtool libjpeg-devel        \
                    libtiff-devel libgit2-devel lzip         \
@@ -5667,18 +4288,14 @@ $ sudo dnf install ghostscript libtool libjpeg-devel    
    \
 @cindex Homebrew
 @cindex MacPorts
 @cindex @command{brew}
-@url{https://en.wikipedia.org/wiki/MacOS,macOS} is the operating system
-used on Apple devices. macOS does not come with a package manager
-pre-installed, but several widely used, third-party package managers exist,
-such as Homebrew or MacPorts. Both are free software. Currently we have
-only tested Gnuastro's installation with Homebrew as described below.
-
-If not already installed, first obtain Homebrew by following the
-instructions at @url{https://brew.sh}. Homebrew manages packages in
-different `taps'. To install WCSLIB (discussed in @ref{Mandatory
-dependencies}) via Homebrew you will need to @command{tap} into
-@command{brewsci/science} first (the tap may change in the future, but can
-be found by calling @command{brew search wcslib}).
+@url{https://en.wikipedia.org/wiki/MacOS,macOS} is the operating system used 
on Apple devices.
+macOS does not come with a package manager pre-installed, but several widely 
used, third-party package managers exist, such as Homebrew or MacPorts.
+Both are free software.
+Currently we have only tested Gnuastro's installation with Homebrew as 
described below.
+
+If not already installed, first obtain Homebrew by following the instructions 
at @url{https://brew.sh}.
+Homebrew manages packages in different `taps'.
+To install WCSLIB (discussed in @ref{Mandatory dependencies}) via Homebrew you 
will need to @command{tap} into @command{brewsci/science} first (the tap may 
change in the future, but can be found by calling @command{brew search wcslib}).
 @example
 $ brew install ghostscript libtool libjpeg libtiff          \
                libgit2 lzip                                 \
@@ -5691,12 +4308,9 @@ $ brew install wcslib
 @item @command{pacman} (Arch Linux)
 @cindex Arch Linux
 @cindex @command{pacman}
-@url{https://en.wikipedia.org/wiki/Arch_Linux,Arch Linux} is a smaller
-GNU/Linux distribution, which follows the KISS principle (``keep it simple,
-stupid'') as a general guideline. It ``focuses on elegance, code
-correctness, minimalism and simplicity, and expects the user to be willing
-to make some effort to understand the system's operation''. Arch Linux uses
-``Package manager'' (Pacman) to manage its packages/components.
+@url{https://en.wikipedia.org/wiki/Arch_Linux,Arch Linux} is a smaller 
GNU/Linux distribution, which follows the KISS principle (``keep it simple, 
stupid'') as a general guideline.
+It ``focuses on elegance, code correctness, minimalism and simplicity, and 
expects the user to be willing to make some effort to understand the system's 
operation''.
+Arch Linux uses ``Package manager'' (Pacman) to manage its packages/components.
 @example
 $ sudo pacman -S ghostscript libtool libjpeg libtiff        \
                  libgit2 lzip                               \
@@ -5707,15 +4321,10 @@ $ sudo pacman -S ghostscript libtool libjpeg libtiff    
    \
 @item @command{zypper} (openSUSE and SUSE Linux Enterprise Server)
 @cindex openSUSE
 @cindex SUSE Linux Enterprise Server
-@cindex @command{zypper}
-@url{https://www.opensuse.org,openSUSE} is a community project supported by
-@url{https://www.suse.com,SUSE} with both stable and rolling releases.
-SUSE Linux Enterprise
-Server@footnote{@url{https://www.suse.com/products/server}} (SLES) is the
-commercial offering which shares code and tools. Many additional packages
-are offered in the Build
-Service@footnote{@url{https://build.opensuse.org}}. openSUSE and SLES use
-@command{zypper} (cli) and YaST (GUI) for managing repositories and
+@cindex @command{zypper}
+@url{https://www.opensuse.org,openSUSE} is a community project supported by @url{https://www.suse.com,SUSE} with both stable and rolling releases.
+SUSE Linux Enterprise 
Server@footnote{@url{https://www.suse.com/products/server}} (SLES) is the 
commercial offering which shares code and tools.
+Many additional packages are offered in the Build 
Service@footnote{@url{https://build.opensuse.org}}.
+openSUSE and SLES use @command{zypper} (CLI) and YaST (GUI) for managing repositories and
 packages.
 
 @example
@@ -5725,8 +4334,7 @@ $ sudo zypper install ghostscript_any libtool pkgconfig   
 \
               wcslib-devel
 @end example
 @noindent
-When building Gnuastro, run the configure script with the following
-@code{CPPFLAGS} environment variable:
+When building Gnuastro, run the configure script with the following 
@code{CPPFLAGS} environment variable:
 
 @example
 $ ./configure CPPFLAGS="-I/usr/include/cfitsio"
@@ -5736,20 +4344,14 @@ $ ./configure CPPFLAGS="-I/usr/include/cfitsio"
 @c in @command{zypper}. Just make sure it is the most recent version.
 @end table
 
-Usually, when libraries are installed by operating system package managers,
-there should be no problems when configuring and building other programs
-from source (that depend on the libraries: Gnuastro in this case). However,
-in some special conditions, problems may pop-up during the configuration,
-building, or checking/running any of Gnuastro's programs. The most common
-of such problems and their solution are discussed below.
+Usually, when libraries are installed by operating system package managers, 
there should be no problems when configuring and building other programs from 
source (that depend on the libraries: Gnuastro in this case).
+However, in some special conditions, problems may pop up during the configuration, building, or checking/running of any of Gnuastro's programs.
+The most common of such problems and their solutions are discussed below.
 
 @cartouche
 @noindent
-@strong{Not finding library during configuration:} If a library is
-installed, but during Gnuastro's @command{configure} step the library isn't
-found, then configure Gnuastro like the command below (correcting
-@file{/path/to/lib}). For more, see @ref{Known issues} and
-@ref{Installation directory}.
+@strong{Not finding library during configuration:} If a library is installed, 
but during Gnuastro's @command{configure} step the library isn't found, then 
configure Gnuastro like the command below (correcting @file{/path/to/lib}).
+For more, see @ref{Known issues} and @ref{Installation directory}.
 @example
 $ ./configure LDFLAGS="-L/path/to/lib"
 @end example
@@ -5757,11 +4359,8 @@ $ ./configure LDFLAGS="-L/path/to/lib"
 
 @cartouche
 @noindent
-@strong{Not finding header (.h) files while building:} If a library is
-installed, but during Gnuastro's @command{make} step, the library's header
-(file with a @file{.h} suffix) isn't found, then configure Gnuastro like
-the command below (correcting @file{/path/to/include}). For more, see
-@ref{Known issues} and @ref{Installation directory}.
+@strong{Not finding header (.h) files while building:} If a library is 
installed, but during Gnuastro's @command{make} step, the library's header 
(file with a @file{.h} suffix) isn't found, then configure Gnuastro like the 
command below (correcting @file{/path/to/include}).
+For more, see @ref{Known issues} and @ref{Installation directory}.
 @example
 $ ./configure CPPFLAGS="-I/path/to/include"
 @end example
@@ -5769,11 +4368,9 @@ $ ./configure CPPFLAGS="-I/path/to/include"
 
 @cartouche
 @noindent
-@strong{Gnuastro's programs don't run during check or after install:} If a
-library is installed, but the programs don't run due to linking problems,
-set the @code{LD_LIBRARY_PATH} variable like below (assuming Gnuastro is
-installed in @file{/path/to/installed}). For more, see @ref{Known issues}
-and @ref{Installation directory}.
+@strong{Gnuastro's programs don't run during check or after install:}
+If a library is installed, but the programs don't run due to linking problems, 
set the @code{LD_LIBRARY_PATH} variable like below (assuming Gnuastro is 
installed in @file{/path/to/installed}).
+For more, see @ref{Known issues} and @ref{Installation directory}.
 @example
 $ export LD_LIBRARY_PATH="$LD_LIBRARY_PATH:/path/to/installed/lib"
 @end example
@@ -5790,15 +4387,12 @@ $ export 
LD_LIBRARY_PATH="$LD_LIBRARY_PATH:/path/to/installed/lib"
 @node Downloading the source, Build and install, Dependencies, Installation
 @section Downloading the source
 
-Gnuastro's source code can be downloaded in two ways. As a tarball, ready
-to be configured and installed on your system (as described in @ref{Quick
-start}), see @ref{Release tarball}. If you want official releases of stable
-versions this is the best, easiest and most common option. Alternatively,
-you can clone the version controlled history of Gnuastro, run one extra
-bootstrapping step and then follow the same steps as the tarball. This will
-give you access to all the most recent work that will be included in the
-next release along with the full project history. The process is thoroughly
-introduced in @ref{Version controlled source}.
+Gnuastro's source code can be downloaded in two ways.
+As a tarball, ready to be configured and installed on your system (as 
described in @ref{Quick start}), see @ref{Release tarball}.
+If you want official releases of stable versions, this is the best, easiest, and most common option.
+Alternatively, you can clone the version controlled history of Gnuastro, run 
one extra bootstrapping step and then follow the same steps as the tarball.
+This will give you access to all the most recent work that will be included in 
the next release along with the full project history.
+The process is thoroughly introduced in @ref{Version controlled source}.
 
 
 
@@ -5810,59 +4404,43 @@ introduced in @ref{Version controlled source}.
 @node Release tarball, Version controlled source, Downloading the source, 
Downloading the source
 @subsection Release tarball
 
-A release tarball (commonly compressed) is the most common way of obtaining
-free and open source software. A tarball is a snapshot of one particular
-moment in the Gnuastro development history along with all the necessary
-files to configure, build, and install Gnuastro easily (see @ref{Quick
-start}). It is very straightforward and needs the least set of dependencies
-(see @ref{Mandatory dependencies}). Gnuastro has tarballs for official
-stable releases and pre-releases for testing. See @ref{Version numbering}
-for more on the two types of releases and the formats of the version
-numbers. The URLs for each type of release are given below.
+A release tarball (commonly compressed) is the most common way of obtaining 
free and open source software.
+A tarball is a snapshot of one particular moment in the Gnuastro development 
history along with all the necessary files to configure, build, and install 
Gnuastro easily (see @ref{Quick start}).
+It is very straightforward and needs the smallest set of dependencies (see @ref{Mandatory dependencies}).
+Gnuastro has tarballs for official stable releases and pre-releases for 
testing.
+See @ref{Version numbering} for more on the two types of releases and the 
formats of the version numbers.
+The URLs for each type of release are given below.
 
 @table @asis
 
 @item Official stable releases (@url{http://ftp.gnu.org/gnu/gnuastro}):
-This URL hosts the official stable releases of Gnuastro. Always use the
-most recent version (see @ref{Version numbering}). By clicking on the
-``Last modified'' title of the second column, the files will be sorted by
-their date which you can also use to find the latest version. It is
-recommended to use a mirror to download these tarballs, please visit
-@url{http://ftpmirror.gnu.org/gnuastro/} and see below.
+This URL hosts the official stable releases of Gnuastro.
+Always use the most recent version (see @ref{Version numbering}).
+By clicking on the ``Last modified'' title of the second column, the files will be sorted by their date, which you can also use to find the latest version.
+It is recommended to use a mirror to download these tarballs; please visit @url{http://ftpmirror.gnu.org/gnuastro/} and see below.
 
 @item Pre-release tar-balls (@url{http://alpha.gnu.org/gnu/gnuastro}):
-This URL contains unofficial pre-release versions of Gnuastro. The
-pre-release versions of Gnuastro here are for enthusiasts to try out before
-an official release. If there are problems, or bugs then the testers will
-inform the developers to fix before the next official release. See
-@ref{Version numbering} to understand how the version numbers here are
-formatted. If you want to remain even more up-to-date with the developing
-activities, please clone the version controlled source as described in
-@ref{Version controlled source}.
+This URL contains unofficial pre-release versions of Gnuastro.
+The pre-release versions of Gnuastro here are for enthusiasts to try out 
before an official release.
+If there are problems or bugs, the testers will inform the developers so they can be fixed before the next official release.
+See @ref{Version numbering} to understand how the version numbers here are 
formatted.
+If you want to remain even more up-to-date with the development activities, please clone the version controlled source as described in @ref{Version controlled source}.
 
 @end table
 
 @cindex Gzip
 @cindex Lzip
-Gnuastro's official/stable tarball is released with two formats: Gzip (with
-suffix @file{.tar.gz}) and Lzip (with suffix @file{.tar.lz}). The
-pre-release tarballs (after version 0.3) are released only as an Lzip
-tarball. Gzip is a very well-known and widely used compression program
-created by GNU and available in most systems. However, Lzip provides a
-better compression ratio and more robust archival capacity. For example
-Gnuastro 0.3's tarball was 2.9MB and 4.3MB with Lzip and Gzip respectively,
-see the @url{http://www.nongnu.org/lzip/lzip.html, Lzip webpage} for
-more. Lzip might not be pre-installed in your operating system, if so,
-installing it from your operating system's package manager or from source
-is very easy and fast (it is a very small program).
-
-The GNU FTP server is mirrored (has backups) in various locations on the
-globe (@url{http://www.gnu.org/order/ftp.html}). You can use the closest
-mirror to your location for a more faster download. Note that only some
-mirrors keep track of the pre-release (alpha) tarballs. Also note that if
-you want to download immediately after and announcement (see
-@ref{Announcements}), the mirrors might need some time to synchronize with
-the main GNU FTP server.
+Gnuastro's official/stable tarball is released in two formats: Gzip (with suffix @file{.tar.gz}) and Lzip (with suffix @file{.tar.lz}).
+The pre-release tarballs (after version 0.3) are released only as an Lzip 
tarball.
+Gzip is a very well-known and widely used compression program created by GNU 
and available in most systems.
+However, Lzip provides a better compression ratio and more robust archival 
capacity.
+For example, Gnuastro 0.3's tarball was 2.9MB and 4.3MB with Lzip and Gzip respectively; see the @url{http://www.nongnu.org/lzip/lzip.html, Lzip webpage} for more.
+Lzip might not be pre-installed in your operating system; if so, installing it from your operating system's package manager or from source is very easy and fast (it is a very small program).
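+
+Once Lzip is available, unpacking remains a single command, since recent GNU Tar (version 1.23 and later, an assumption worth checking on old systems) detects Lzip compression automatically (replace @file{X.X} with the version you downloaded):
+
+@example
+$ tar -xf gnuastro-X.X.tar.lz
+@end example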
+
+The GNU FTP server is mirrored (has backups) in various locations on the globe 
(@url{http://www.gnu.org/order/ftp.html}).
+You can use the closest mirror to your location for a faster download.
+Note that only some mirrors keep track of the pre-release (alpha) tarballs.
+Also note that if you want to download immediately after an announcement (see @ref{Announcements}), the mirrors might need some time to synchronize with the main GNU FTP server.
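+
+For example, the command below fetches a stable tarball through the redirecting mirror address mentioned above (a sketch; replace @file{X.X} with the release you want):
+
+@example
+$ wget http://ftpmirror.gnu.org/gnuastro/gnuastro-X.X.tar.lz
+@end example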
 
 
 @node Version controlled source,  , Release tarball, Downloading the source
@@ -5870,28 +4448,16 @@ the main GNU FTP server.
 
 @cindex Git
 @cindex Version control
-The publicly distributed Gnuastro tar-ball (for example
-@file{gnuastro-X.X.tar.gz}) does not contain the revision history, it is
-only a snapshot of the source code at one significant instant of Gnuastro's
-history (specified by the version number, see @ref{Version numbering}),
-ready to be configured and built. To be able to develop successfully, the
-revision history of the code can be very useful to track when something was
-added or changed, also some updates that are not yet officially released
-might be in it.
-
-We use Git for the version control of Gnuastro. For those who are not
-familiar with it, we recommend the @url{https://git-scm.com/book/en, ProGit
-book}. The whole book is publicly available for online reading and
-downloading and does a wonderful job at explaining the concepts and best
-practices.
-
-Let's assume you want to keep Gnuastro in the @file{TOPGNUASTRO} directory
-(can be any directory, change the value below). The full version controlled
-history of Gnuastro can be cloned in @file{TOPGNUASTRO/gnuastro} by running
-the following commands@footnote{If your internet connection is active, but
-Git complains about the network, it might be due to your network setup not
-recognizing the Git protocol. In that case use the following URL which uses
-the HTTP protocol instead: @command{http://git.sv.gnu.org/r/gnuastro.git}}:
+The publicly distributed Gnuastro tarball (for example @file{gnuastro-X.X.tar.gz}) does not contain the revision history; it is only a snapshot of the source code at one significant instant of Gnuastro's history (specified by the version number, see @ref{Version numbering}), ready to be configured and built.
+To be able to develop successfully, the revision history of the code can be very useful to track when something was added or changed; it may also contain some updates that are not yet officially released.
+
+We use Git for the version control of Gnuastro.
+For those who are not familiar with it, we recommend the @url{https://git-scm.com/book/en, Pro Git book}.
+The whole book is publicly available for online reading and downloading and 
does a wonderful job at explaining the concepts and best practices.
+
+Let's assume you want to keep Gnuastro in the @file{TOPGNUASTRO} directory 
(can be any directory, change the value below).
+The full version controlled history of Gnuastro can be cloned in 
@file{TOPGNUASTRO/gnuastro} by running the following commands@footnote{If your 
internet connection is active, but Git complains about the network, it might be 
due to your network setup not recognizing the Git protocol.
+In that case use the following URL which uses the HTTP protocol instead: 
@command{http://git.sv.gnu.org/r/gnuastro.git}}:
 
 @example
 $ TOPGNUASTRO=/home/yourname/Research/projects/
@@ -5900,28 +4466,18 @@ $ git clone git://git.sv.gnu.org/gnuastro.git
 @end example
 
 @noindent
-The @file{$TOPGNUASTRO/gnuastro} directory will contain hand-written
-(version controlled) source code for Gnuastro's programs, libraries, this
-book and the tests. All are divided into sub-directories with standard and
-very descriptive names. The version controlled files in the top cloned
-directory are either mainly in capital letters (for example @file{THANKS}
-and @file{README}) or mainly written in small-caps (for example
-@file{configure.ac} and @file{Makefile.am}). The former are
-non-programming, standard writing for human readers containing high-level
-information about the whole package. The latter are instructions to
-customize the GNU build system for Gnuastro. For more on Gnuastro's source
-code structure, please see @ref{Developing}. We won't go any deeper here.
+The @file{$TOPGNUASTRO/gnuastro} directory will contain hand-written (version 
controlled) source code for Gnuastro's programs, libraries, this book and the 
tests.
+All are divided into sub-directories with standard and very descriptive names.
+The version controlled files in the top cloned directory are either mainly in capital letters (for example @file{THANKS} and @file{README}) or mainly in lower-case letters (for example @file{configure.ac} and @file{Makefile.am}).
+The former are non-programming, standard writing for human readers containing 
high-level information about the whole package.
+The latter are instructions to customize the GNU build system for Gnuastro.
+For more on Gnuastro's source code structure, please see @ref{Developing}.
+We won't go any deeper here.
 
-The cloned Gnuastro source cannot immediately be configured, compiled, or
-installed since it only contains hand-written files, not automatically
-generated or imported files which do all the hard work of the build
-process. See @ref{Bootstrapping} for the process of generating and
-importing those files (its not too hard!). Once you have bootstrapped
-Gnuastro, you can run the standard procedures (in @ref{Quick start}). Very
-soon after you have cloned it, Gnuastro's main @file{master} branch will be
-updated on the main repository (since the developers are actively working
-on Gnuastro), for the best practices in keeping your local history in sync
-with the main repository see @ref{Synchronizing}.
+The cloned Gnuastro source cannot immediately be configured, compiled, or installed, since it only contains hand-written files, not the automatically generated or imported files that do all the hard work of the build process.
+See @ref{Bootstrapping} for the process of generating and importing those files (it's not too hard!).
+Once you have bootstrapped Gnuastro, you can run the standard procedures (in @ref{Quick start}).
+Very soon after you have cloned it, Gnuastro's main @file{master} branch will be updated on the main repository (since the developers are actively working on Gnuastro); for the best practices in keeping your local history in sync with the main repository, see @ref{Synchronizing}.
 
 
 
@@ -5941,63 +4497,37 @@ with the main repository see @ref{Synchronizing}.
 @cindex GNU Portability Library (Gnulib)
 @cindex Automatically created build files
 @noindent
-The version controlled source code lacks the source files that we have not
-written or are automatically built. These automatically generated files are
-included in the distributed tar ball for each distribution (for example
-@file{gnuastro-X.X.tar.gz}, see @ref{Version numbering}) and make it easy
-to immediately configure, build, and install Gnuastro. However from the
-perspective of version control, they are just bloatware and sources of
-confusion (since they are not changed by Gnuastro developers).
-
-The process of automatically building and importing necessary files into
-the cloned directory is known as @emph{bootstrapping}. All the instructions
-for an automatic bootstrapping are available in @file{bootstrap} and
-configured using @file{bootstrap.conf}. @file{bootstrap} and @file{COPYING}
-(which contains the software copyright notice) are the only files not
-written by Gnuastro developers but under version control to enable simple
-bootstrapping and legal information on usage immediately after
-cloning. @file{bootstrap.conf} is maintained by the GNU Portability Library
-(Gnulib) and this file is an identical copy, so do not make any changes in
-this file since it will be replaced when Gnulib releases an update. Make
-all your changes in @file{bootstrap.conf}.
-
-The bootstrapping process has its own separate set of dependencies, the
-full list is given in @ref{Bootstrapping dependencies}. They are generally
-very low-level and used by a very large set of commonly used programs, so
-they are probably already installed on your system. The simplest way to
-bootstrap Gnuastro is to simply run the bootstrap script within your cloned
-Gnuastro directory as shown below. However, please read the next paragraph
-before doing so (see @ref{Version controlled source} for
-@file{TOPGNUASTRO}).
+The version controlled source code lacks the source files that we have not written ourselves, or that are automatically built.
+These automatically generated files are included in the distributed tarball of each release (for example @file{gnuastro-X.X.tar.gz}, see @ref{Version numbering}) and make it easy to immediately configure, build, and install Gnuastro.
+However, from the perspective of version control, they are just bloatware and sources of confusion (since they are not changed by Gnuastro developers).
+
+The process of automatically building and importing necessary files into the 
cloned directory is known as @emph{bootstrapping}.
+All the instructions for an automatic bootstrapping are available in 
@file{bootstrap} and configured using @file{bootstrap.conf}.
+@file{bootstrap} and @file{COPYING} (which contains the software copyright notice) are the only files not written by Gnuastro developers that are under version control; they enable simple bootstrapping and provide legal information on usage immediately after cloning.
+@file{bootstrap} is maintained by the GNU Portability Library (Gnulib) and this file is an identical copy, so do not make any changes in it since it will be replaced when Gnulib releases an update.
+Make all your changes in @file{bootstrap.conf}.
+
+The bootstrapping process has its own separate set of dependencies; the full list is given in @ref{Bootstrapping dependencies}.
+They are generally very low-level and used by a very large set of commonly used programs, so they are probably already installed on your system.
+The simplest way to bootstrap Gnuastro is to run the bootstrap script within your cloned Gnuastro directory as shown below.
+However, please read the next paragraph before doing so (see @ref{Version 
controlled source} for @file{TOPGNUASTRO}).
 
 @example
 $ cd TOPGNUASTRO/gnuastro
 $ ./bootstrap                      # Requires internet connection
 @end example
 
-Without any options, @file{bootstrap} will clone Gnulib within your cloned
-Gnuastro directory (@file{TOPGNUASTRO/gnuastro/gnulib}) and download the
-necessary Autoconf archives macros. So if you run bootstrap like this, you
-will need an internet connection every time you decide to bootstrap. Also,
-Gnulib is a large package and cloning it can be slow. It will also keep the
-full Gnulib repository within your Gnuastro repository, so if another one
-of your projects also needs Gnulib, and you insist on running bootstrap
-like this, you will have two copies. In case you regularly backup your
-important files, Gnulib will also slow down the backup process. Therefore
-while the simple invocation above can be used with no problem, it is not
-recommended. To do better, see the next paragraph.
-
-The recommended way to get these two packages is thoroughly discussed in
-@ref{Bootstrapping dependencies} (in short: clone them in the separate
-@file{DEVDIR/} directory). The following commands will take you into the
-cloned Gnuastro directory and run the @file{bootstrap} script, while
-telling it to copy some files (instead of making symbolic links, with the
-@option{--copy} option, this is not mandatory@footnote{The @option{--copy}
-option is recommended because some backup systems might do strange things
-with symbolic links.}) and where to look for Gnulib (with the
-@option{--gnulib-srcdir} option). Please note that the address given to
-@option{--gnulib-srcdir} has to be an absolute address (so don't use
-@file{~} or @file{../} for example).
+Without any options, @file{bootstrap} will clone Gnulib within your cloned Gnuastro directory (@file{TOPGNUASTRO/gnuastro/gnulib}) and download the necessary macros from the Autoconf archives.
+So if you run bootstrap like this, you will need an internet connection every 
time you decide to bootstrap.
+Also, Gnulib is a large package and cloning it can be slow.
+It will also keep the full Gnulib repository within your Gnuastro repository, 
so if another one of your projects also needs Gnulib, and you insist on running 
bootstrap like this, you will have two copies.
+In case you regularly back up your important files, Gnulib will also slow down the backup process.
+Therefore, while the simple invocation above can be used with no problem, it is not recommended.
+To do better, see the next paragraph.
+
+The recommended way to get these two packages is thoroughly discussed in 
@ref{Bootstrapping dependencies} (in short: clone them in the separate 
@file{DEVDIR/} directory).
+The following commands will take you into the cloned Gnuastro directory and run the @file{bootstrap} script, while telling it to copy some files instead of making symbolic links (with the @option{--copy} option; this is not mandatory@footnote{The @option{--copy} option is recommended because some backup systems might do strange things with symbolic links.}) and where to look for Gnulib (with the @option{--gnulib-srcdir} option).
+Please note that the address given to @option{--gnulib-srcdir} has to be an 
absolute address (so don't use @file{~} or @file{../} for example).
 
 @example
 $ cd $TOPGNUASTRO/gnuastro
@@ -6011,68 +4541,49 @@ $ ./bootstrap --copy --gnulib-srcdir=$DEVDIR/gnulib
 @cindex GNU Automake
 @cindex GNU C library
 @cindex GNU build system
-Since Gnulib and Autoconf archives are now available in your local
-directories, you don't need an internet connection every time you decide to
-remove all untracked files and redo the bootstrap (see box below). You can
-also use the same command on any other project that uses Gnulib. All the
-necessary GNU C library functions, Autoconf macros and Automake inputs are
-now available along with the book figures. The standard GNU build system
-(@ref{Quick start}) will do the rest of the job.
+Since Gnulib and Autoconf archives are now available in your local 
directories, you don't need an internet connection every time you decide to 
remove all untracked files and redo the bootstrap (see box below).
+You can also use the same command on any other project that uses Gnulib.
+All the necessary GNU C library functions, Autoconf macros and Automake inputs 
are now available along with the book figures.
+The standard GNU build system (@ref{Quick start}) will do the rest of the job.
 
 @cartouche
 @noindent
-@strong{Undoing the bootstrap:} During the development, it might happen
-that you want to remove all the automatically generated and imported
-files. In other words, you might want to reverse the bootstrap
-process. Fortunately Git has a good program for this job: @command{git
-clean}. Run the following command and every file that is not version
-controlled will be removed.
+@strong{Undoing the bootstrap:}
+During the development, it might happen that you want to remove all the 
automatically generated and imported files.
+In other words, you might want to reverse the bootstrap process.
+Fortunately Git has a good program for this job: @command{git clean}.
+Run the following command and every file that is not version controlled will 
be removed.
 
 @example
 git clean -fxd
 @end example
 
 @noindent
-It is best to commit any recent change before running this
-command. You might have created new files since the last commit and if
-they haven't been committed, they will all be gone forever (using
-@command{rm}). To get a list of the non-version controlled files
-instead of deleting them, add the @option{n} option to @command{git
-clean}, so it becomes @option{-fxdn}.
+It is best to commit any recent change before running this command.
+You might have created new files since the last commit and if they haven't 
been committed, they will all be gone forever (using @command{rm}).
+To get a list of the non-version controlled files instead of deleting them, 
add the @option{n} option to @command{git clean}, so it becomes @option{-fxdn}.
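+
+For example, this dry run only lists what would be removed, without deleting anything:
+
+@example
+git clean -fxdn
+@end example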
 @end cartouche
 
-Besides the @file{bootstrap} and @file{bootstrap.conf}, the
-@file{bootstrapped/} directory and @file{README-hacking} file are also
-related to the bootstrapping process. The former hosts all the imported
-(bootstrapped) directories. Thus, in the version controlled source, it only
-contains a @file{REAME} file, but in the distributed tar-ball it also
-contains sub-directories filled with all bootstrapped
-files. @file{README-hacking} contains a summary of the bootstrapping
-process discussed in this section. It is a necessary reference when you
-haven't built this book yet. It is thus not distributed in the Gnuastro
-tarball.
+Besides the @file{bootstrap} and @file{bootstrap.conf}, the 
@file{bootstrapped/} directory and @file{README-hacking} file are also related 
to the bootstrapping process.
+The former hosts all the imported (bootstrapped) directories.
+Thus, in the version controlled source, it only contains a @file{README} file, but in the distributed tarball it also contains sub-directories filled with all bootstrapped files.
+@file{README-hacking} contains a summary of the bootstrapping process 
discussed in this section.
+It is a necessary reference when you haven't built this book yet.
+It is thus not distributed in the Gnuastro tarball.
 
 
 @node Synchronizing,  , Bootstrapping, Version controlled source
 @subsubsection Synchronizing
 
-The bootstrapping script (see @ref{Bootstrapping}) is not regularly needed:
-you mainly need it after you have cloned Gnuastro (once) and whenever you
-want to re-import the files from Gnulib, or Autoconf
-archives@footnote{@url{https://savannah.gnu.org/task/index.php?13993} is
-defined for you to check if significant (for Gnuastro) updates are made in
-these repositories, since the last time you pulled from them.} (not too
-common). However, Gnuastro developers are constantly working on Gnuastro
-and are pushing their changes to the official repository. Therefore, your
-local Gnuastro clone will soon be out-dated. Gnuastro has two mailing lists
-dedicated to its developing activities (see @ref{Developing mailing
-lists}).  Subscribing to them can help you decide when to synchronize with
-the official repository.
-
-To pull all the most recent work in Gnuastro, run the following command
-from the top Gnuastro directory. If you don't already have a built system,
-ignore @command{make distclean}. The separate steps are described in detail
-afterwards.
+The bootstrapping script (see @ref{Bootstrapping}) is not regularly needed: you mainly need it after you have cloned Gnuastro (once) and whenever you want to re-import the files from Gnulib or the Autoconf archives@footnote{@url{https://savannah.gnu.org/task/index.php?13993} is defined for you to check if significant (for Gnuastro) updates have been made in these repositories since the last time you pulled from them.} (not too common).
+However, Gnuastro developers are constantly working on Gnuastro and are 
pushing their changes to the official repository.
+Therefore, your local Gnuastro clone will soon be outdated.
+Gnuastro has two mailing lists dedicated to its development activities (see @ref{Developing mailing lists}).
+Subscribing to them can help you decide when to synchronize with the official 
repository.
+
+To pull all the most recent work in Gnuastro, run the following command from 
the top Gnuastro directory.
+If you don't already have a built system, ignore @command{make distclean}.
+The separate steps are described in detail afterwards.
 
 @example
 $ make distclean && git pull && autoreconf -f
@@ -6090,40 +4601,25 @@ $ autoreconf -f
 @cindex GNU Autoconf
 @cindex Mailing list: info-gnuastro
 @cindex @code{info-gnuastro@@gnu.org}
-If Gnuastro was already built in this directory, you don't want some
-outputs from the previous version being mixed with outputs from the newly
-pulled work. Therefore, the first step is to clean/delete all the built
-files with @command{make distclean}. Fortunately the GNU build system
-allows the separation of source and built files (in separate
-directories). This is a great feature to keep your source directory clean
-and you can use it to avoid the cleaning step. Gnuastro comes with a script
-with some useful options for this job. It is useful if you regularly pull
-recent changes, see @ref{Separate build and source directories}.
-
-After the pull, we must re-configure Gnuastro with @command{autoreconf -f}
-(part of GNU Autoconf). It will update the @file{./configure} script and
-all the @file{Makefile.in}@footnote{In the GNU build system,
-@command{./configure} will use the @file{Makefile.in} files to create the
-necessary @file{Makefile} files that are later read by @command{make} to
-build the package.} files based on the hand-written configurations (in
-@file{configure.ac} and the @file{Makefile.am} files). After running
-@command{autoreconf -f}, a warning about @code{TEXI2DVI} might show up, you
-can ignore that.
-
-The most important reason for re-building Gnuastro's build system is to
-generate/update the version number for your updated Gnuastro snapshot. This
-generated version number will include the commit information (see
-@ref{Version numbering}). The version number is included in nearly all
-outputs of Gnuastro's programs, therefore it is vital for reproducing an
-old result.
-
-As a summary, be sure to run `@command{autoreconf -f}' after every change
-in the Git history. This includes synchronization with the main server or
-even a commit you have made yourself.
-
-If you would like to see what has changed since you last synchronized your
-local clone, you can take the following steps instead of the simple command
-above (don't type anything after @code{#}):
+If Gnuastro was already built in this directory, you don't want outputs from the previous version to be mixed with outputs from the newly pulled work.
+Therefore, the first step is to clean/delete all the built files with 
@command{make distclean}.
+Fortunately, the GNU build system allows the separation of source and built files (in separate directories).
+This is a great feature to keep your source directory clean, and you can use it to avoid the cleaning step.
+Gnuastro comes with a script with some useful options for this job.
+It is useful if you regularly pull recent changes; see @ref{Separate build and source directories}.
+
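+As a rough sketch of using it (the script's name, @file{developer-build}, and its default behavior are assumptions here; see @ref{Separate build and source directories} for the authoritative details):
+
+@example
+$ ./developer-build    # Configure and build in a separate directory.
+@end example
+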
+After the pull, we must re-configure Gnuastro with @command{autoreconf -f} 
(part of GNU Autoconf).
+It will update the @file{./configure} script and all the 
@file{Makefile.in}@footnote{In the GNU build system, @command{./configure} will 
use the @file{Makefile.in} files to create the necessary @file{Makefile} files 
that are later read by @command{make} to build the package.} files based on the 
hand-written configurations (in @file{configure.ac} and the @file{Makefile.am} 
files).
+After running @command{autoreconf -f}, a warning about @code{TEXI2DVI} might show up; you can ignore it.
+
+The most important reason for re-building Gnuastro's build system is to 
generate/update the version number for your updated Gnuastro snapshot.
+This generated version number will include the commit information (see 
@ref{Version numbering}).
+The version number is included in nearly all outputs of Gnuastro's programs; it is therefore vital for reproducing an old result.
+
+As a summary, be sure to run `@command{autoreconf -f}' after every change in 
the Git history.
+This includes synchronization with the main server or even a commit you have 
made yourself.
+
+If you would like to see what has changed since you last synchronized your 
local clone, you can take the following steps instead of the simple command 
above (don't type anything after @code{#}):
 
 @example
 $ git checkout master             # Confirm if you are on master.
@@ -6134,22 +4630,14 @@ $ autoreconf -f                   # Update the build 
system.
 @end example
 
 @noindent
-By default @command{git log} prints the most recent commit first, add the
-@option{--reverse} option to see the changes chronologically. To see
-exactly what has been changed in the source code along with the commit
-message, add a @option{-p} option to the @command{git log}.
+By default, @command{git log} prints the most recent commit first; add the @option{--reverse} option to see the changes chronologically.
+To see exactly what has been changed in the source code along with the commit message, add the @option{-p} option to @command{git log}.
 
-If you want to make changes in the code, have a look at @ref{Developing} to
-get started easily. Be sure to commit your changes in a separate branch
-(keep your @code{master} branch to follow the official repository) and
-re-run @command{autoreconf -f} after the commit. If you intend to send your
-work to us, you can safely use your commit since it will be ultimately
-recorded in Gnuastro's official history. If not, please upload your
-separate branch to a public hosting service, for example
-@url{https://gitlab.com, GitLab}, and link to it in your
-report/paper. Alternatively, run @command{make distcheck} and upload the
-output @file{gnuastro-X.X.X.XXXX.tar.gz} to a publicly accessible webpage
-so your results can be considered scientific (reproducible) later.
+If you want to make changes in the code, have a look at @ref{Developing} to 
get started easily.
+Be sure to commit your changes in a separate branch (keep your @code{master} 
branch to follow the official repository) and re-run @command{autoreconf -f} 
after the commit.
+If you intend to send your work to us, you can safely use your commit since it will ultimately be recorded in Gnuastro's official history.
+If not, please upload your separate branch to a public hosting service, for 
example @url{https://gitlab.com, GitLab}, and link to it in your report/paper.
+Alternatively, run @command{make distcheck} and upload the output 
@file{gnuastro-X.X.X.XXXX.tar.gz} to a publicly accessible webpage so your 
results can be considered scientific (reproducible) later.
 
 
 
@@ -6166,25 +4654,12 @@ so your results can be considered scientific 
(reproducible) later.
 @node Build and install,  , Downloading the source, Installation
 @section Build and install
 
-This section is basically a longer explanation to the sequence of commands
-given in @ref{Quick start}. If you didn't have any problems during the
-@ref{Quick start} steps, you want to have all the programs of Gnuastro
-installed in your system, you don't want to change the executable names
-during or after installation, you have root access to install the programs
-in the default system wide directory, the Letter paper size of the print
-book is fine for you or as a summary you don't feel like going into the
-details when everything is working, you can safely skip this section.
-
-If you have any of the above problems or you want to understand the details
-for a better control over your build and install, read along. The
-dependencies which you will need prior to configuring, building and
-installing Gnuastro are explained in @ref{Dependencies}. The first three
-steps in @ref{Quick start} need no extra explanation, so we will skip them
-and start with an explanation of Gnuastro specific configuration options
-and a discussion on the installation directory in @ref{Configuring},
-followed by some smaller subsections: @ref{Tests}, @ref{A4 print book}, and
-@ref{Known issues} which explains the solutions to known problems you might
-encounter in the installation steps and ways you can solve them.
+This section is basically a longer explanation of the sequence of commands given in @ref{Quick start}.
+If you didn't have any problems during the @ref{Quick start} steps (you want all of Gnuastro's programs installed on your system, you don't want to change the executable names during or after installation, you have root access to install the programs in the default system-wide directory, and the Letter paper size of the printed book is fine for you), or, in short, if you don't feel like going into the details when everything is working, you can safely skip this section.
+
+If you have any of the above problems, or you want to understand the details for better control over your build and install, read along.
+The dependencies which you will need prior to configuring, building and installing Gnuastro are explained in @ref{Dependencies}.
+The first three steps in @ref{Quick start} need no extra explanation, so we will skip them and start with an explanation of Gnuastro-specific configuration options and a discussion on the installation directory in @ref{Configuring}, followed by some smaller subsections: @ref{Tests}, @ref{A4 print book}, and @ref{Known issues}, which explains known problems you might encounter in the installation steps and ways to solve them.
 
 
 @menu
@@ -6204,21 +4679,16 @@ encounter in the installation steps and ways you can 
solve them.
 
 @pindex ./configure
 @cindex Configuring
-The @command{$ ./configure} step is the most important step in the
-build and install process. All the required packages, libraries,
-headers and environment variables are checked in this step. The
-behaviors of make and make install can also be set through command
-line options to this command.
+The @command{$ ./configure} step is the most important step in the build and 
install process.
+All the required packages, libraries, headers and environment variables are 
checked in this step.
+The behaviors of @command{make} and @command{make install} can also be set through command-line options to this command.
 
 @cindex Configure options
 @cindex Customizing installation
 @cindex Installation, customizing
-The configure script accepts various arguments and options which
-enable the final user to highly customize whatever she is
-building. The options to configure are generally very similar to
-normal program options explained in @ref{Arguments and
-options}. Similar to all GNU programs, you can get a full list of the
-options along with a short explanation by running
+The configure script accepts various arguments and options which enable the 
final user to highly customize whatever she is building.
+The options to configure are generally very similar to normal program options 
explained in @ref{Arguments and options}.
+Similar to all GNU programs, you can get a full list of the options along with 
a short explanation by running
 
 @example
 $ ./configure --help
@@ -6226,14 +4696,10 @@ $ ./configure --help
 
 @noindent
 @cindex GNU Autoconf
-A complete explanation is also included in the @file{INSTALL} file. Note
-that this file was written by the authors of GNU Autoconf (which builds the
-@file{configure} script), therefore it is common for all programs which use
-the @command{$ ./configure} script for building and installing, not just
-Gnuastro. Here we only discuss cases where you don't have super-user access
-to the system and if you want to change the executable names. But before
-that, a review of the options to configure that are particular to Gnuastro
-are discussed.
+A complete explanation is also included in the @file{INSTALL} file.
+Note that this file was written by the authors of GNU Autoconf (which builds the @file{configure} script); it is therefore common to all programs which use the @command{$ ./configure} script for building and installing, not just Gnuastro.
+Here we only discuss cases where you don't have super-user access to the system and cases where you want to change the executable names.
+But before that, a review of the configure options that are particular to Gnuastro is given.
 
 @menu
 * Gnuastro configure options::  Configure options particular to Gnuastro.
@@ -6247,11 +4713,9 @@ are discussed.
 
 @cindex @command{./configure} options
 @cindex Configure options particular to Gnuastro
-Most of the options to configure (which are to do with building) are
-similar for every program which uses this script. Here the options
-that are particular to Gnuastro are discussed. The next topics explain
-the usage of other configure options which can be applied to any
-program using the GNU build system (through the configure script).
+Most of the options to configure (which are related to building) are similar for every program which uses this script.
+Here the options that are particular to Gnuastro are discussed.
+The next topics explain the usage of other configure options which can be 
applied to any program using the GNU build system (through the configure 
script).
 
 @vtable @option
 
@@ -6259,111 +4723,102 @@ program using the GNU build system (through the 
configure script).
 @cindex Valgrind
 @cindex Debugging
 @cindex GNU Debugger
-Compile/build Gnuastro with debugging information, no optimization and
-without shared libraries.
-
-In order to allow more efficient programs when using Gnuastro (after the
-installation), by default Gnuastro is built with a 3rd level (a very high
-level) optimization and no debugging information. By default, libraries are
-also built for static @emph{and} shared linking (see
-@ref{Linking}). However, when there are crashes or unexpected behavior,
-these three features can hinder the process of localizing the problem. This
-configuration option is identical to manually calling the configuration
-script with @code{CFLAGS="-g -O0" --disable-shared}.
-
-In the (rare) situations where you need to do your debugging on the shared
-libraries, don't use this option. Instead run the configure script by
-explicitly setting @code{CFLAGS} like this:
+Compile/build Gnuastro with debugging information, no optimization and without 
shared libraries.
+
+In order to allow more efficient programs when using Gnuastro (after the installation), by default Gnuastro is built with third-level (very high) optimization and no debugging information.
+By default, libraries are also built for static @emph{and} shared linking (see 
@ref{Linking}).
+However, when there are crashes or unexpected behavior, these three features 
can hinder the process of localizing the problem.
+This configuration option is identical to manually calling the configuration 
script with @code{CFLAGS="-g -O0" --disable-shared}.
+
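+For example, to configure a debug-friendly build (for the GNU Debugger or Valgrind) with this option:
+
+@example
+$ ./configure --enable-debug
+@end example
+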
+In the (rare) situations where you need to do your debugging on the shared 
libraries, don't use this option.
+Instead, run the configure script by explicitly setting @code{CFLAGS} like this:
 @example
 $ ./configure CFLAGS="-g -O0"
 @end example
 
 @item --enable-check-with-valgrind
 @cindex Valgrind
-Do the @command{make check} tests through Valgrind. Therefore, if any
-crashes or memory-related issues (segmentation faults in particular) occur
-in the tests, the output of Valgrind will also be put in the
-@file{tests/test-suite.log} file without having to manually modify the
-check scripts. This option will also activate Gnuastro's debug mode (see
-the @option{--enable-debug} configure-time option described above).
-
-Valgrind is free software. It is a program for easy checking of
-memory-related issues in programs. It runs a program within its own
-controlled environment and can thus identify the exact line-number in the
-program's source where a memory-related issue occurs. However, it can
-significantly slow-down the tests. So this option is only useful when a
-segmentation fault is found during @command{make check}.
+Do the @command{make check} tests through Valgrind.
+Therefore, if any crashes or memory-related issues (segmentation faults in 
particular) occur in the tests, the output of Valgrind will also be put in the 
@file{tests/test-suite.log} file without having to manually modify the check 
scripts.
+This option will also activate Gnuastro's debug mode (see the 
@option{--enable-debug} configure-time option described above).
+
+Valgrind is free software.
+It is a program for easy checking of memory-related issues in programs.
+It runs a program within its own controlled environment and can thus identify 
the exact line-number in the program's source where a memory-related issue 
occurs.
+However, it can significantly slow down the tests.
+So this option is only useful when a segmentation fault is found during 
@command{make check}.
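+
+For example, after a segmentation fault has shown up in the tests, you could rebuild and re-run them through Valgrind like this:
+
+@example
+$ ./configure --enable-check-with-valgrind
+$ make check
+@end example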
 
 @item --enable-progname
-Only build and install @file{progname} along with any other program that is
-enabled in this fashion. @file{progname} is the name of the executable
-without the @file{ast}, for example @file{crop} for Crop (with the
-executable name of @file{astcrop}).
+Only build and install @file{progname} along with any other program that is 
enabled in this fashion.
+@file{progname} is the name of the executable without the leading @file{ast}; for example, @file{crop} for Crop (with the executable name @file{astcrop}).
 
-Note that by default all the programs will be installed. This option (and
-the @option{--disable-progname} options) are only relevant when you don't
-want to install all the programs. Therefore, if this option is called for
-any of the programs in Gnuastro, any program which is not explicitly
-enabled will not be built or installed.
+Note that by default all the programs will be installed.
+This option (and the @option{--disable-progname} option below) is only relevant when you don't want to install all the programs.
+Therefore, if this option is called for any of the programs in Gnuastro, any 
program which is not explicitly enabled will not be built or installed.
 
 @item --disable-progname
 @itemx --enable-progname=no
-Do not build or install the program named @file{progname}. This is
-very similar to the @option{--enable-progname}, but will build and
-install all the other programs except this one.
+Do not build or install the program named @file{progname}.
+This is very similar to @option{--enable-progname}, but will build and install all the other programs except this one.
+
+@cartouche
+@noindent
+@strong{Note:} If some programs are enabled and some are disabled, it is 
equivalent to simply enabling those that were enabled.
+Listing the disabled programs is redundant.
+@end cartouche
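+For example, following the note above, the two calls below are equivalent (once Crop is explicitly enabled, disabling NoiseChisel is redundant):
+
+@example
+$ ./configure --enable-crop --disable-noisechisel
+$ ./configure --enable-crop
+@end example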
 
 @item --enable-gnulibcheck
 @cindex GNU C library
 @cindex Gnulib: GNU Portability Library
 @cindex GNU Portability Library (Gnulib)
-Enable checks on the GNU Portability Library (Gnulib). Gnulib is used
-by Gnuastro to enable users of non-GNU based operating systems (that
-don't use GNU C library or glibc) to compile and use the advanced
-features that this library provides. We make extensive use of such
-functions. If you give this option to @command{$ ./configure}, when
-you run @command{$ make check}, first the functions in Gnulib will be
-tested, then the Gnuastro executables. If your operating system does
-not support glibc or has an older version of it and you have problems
-in the build process (@command{$ make}), you can give this flag to
-configure to see if the problem is caused by Gnulib not supporting
-your operating system or Gnuastro, see @ref{Known issues}.
+Enable checks on the GNU Portability Library (Gnulib).
+Gnulib is used by Gnuastro so that users of non-GNU based operating systems (that don't use the GNU C library, or glibc) can compile and use the advanced features that this library provides.
+We make extensive use of such functions.
+If you give this option to @command{$ ./configure}, then when you run @command{$ make check}, first the functions in Gnulib will be tested, then the Gnuastro executables.
+If your operating system does not support glibc, or has an older version of it, and you have problems in the build process (@command{$ make}), you can give this flag to configure to see if the problem is caused by Gnulib not supporting your operating system, or by Gnuastro itself; see @ref{Known issues}.
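+For example, a sketch of a build that first tests Gnulib's functions on your system:
+
+@example
+$ ./configure --enable-gnulibcheck
+$ make
+$ make check
+@end example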
 
 @item --disable-guide-message
 @itemx --enable-guide-message=no
-Do not print a guiding message during the GNU Build process of @ref{Quick
-start}. By default, after each step, a message is printed guiding the user
-what the next command should be. Therefore, after @command{./configure}, it
-will suggest running @command{make}. After @command{make}, it will suggest
-running @command{make check} and so on. If Gnuastro is configured with this
-option, for example
+Do not print a guiding message during the GNU Build process of @ref{Quick 
start}.
+By default, after each step, a message is printed guiding the user what the 
next command should be.
+Therefore, after @command{./configure}, it will suggest running @command{make}.
+After @command{make}, it will suggest running @command{make check} and so on.
+If Gnuastro is configured with this option, for example
 @example
 $ ./configure --disable-guide-message
 @end example
-Then these messages will not be printed after any step (like most
-programs). For people who are not yet fully accustomed to this build
-system, these guidelines can be very useful and encouraging. However, if
-you find those messages annoying, use this option.
+@noindent
+then these messages will not be printed after any step (as in most programs).
+For people who are not yet fully accustomed to this build system, these guidelines can be very useful and encouraging.
+However, if you find those messages annoying, use this option.
 
+@item --without-libgit2
+@cindex Git
+@pindex libgit2
+@cindex Version control systems
+Build Gnuastro without libgit2 (used for including Git commit hashes in output files), see @ref{Optional dependencies}.
+libgit2 is an optional dependency; with this option, Gnuastro will ignore any libgit2 that may already be on the system.
 
-@end vtable
+@item --without-libjpeg
+@pindex libjpeg
+@cindex JPEG format
+Build Gnuastro without libjpeg (used for reading from and writing to JPEG files), see @ref{Optional dependencies}.
+libjpeg is an optional dependency; with this option, Gnuastro will ignore any libjpeg that may already be on the system.
 
-@cartouche
-@noindent
-@strong{Note:} If some programs are enabled and some are disabled, it
-is equivalent to simply enabling those that were enabled. Listing the
-disabled programs is redundant.
-@end cartouche
+@item --without-libtiff
+@pindex libtiff
+@cindex TIFF format
+Build Gnuastro without libtiff (used for reading from and writing to TIFF files), see @ref{Optional dependencies}.
+libtiff is an optional dependency; with this option, Gnuastro will ignore any libtiff that may already be on the system.
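+For example, a sketch of disabling two of these optional dependencies in a single configure call:
+
+@example
+$ ./configure --without-libgit2 --without-libjpeg
+@end example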
+
+@end vtable
 
-The tests of some programs might depend on the outputs of the tests of
-other programs. For example MakeProfiles is one the first programs to be
-tested when you run @command{$ make check}. MakeProfiles' test outputs
-(FITS images) are inputs to many other programs (which in turn provide
-inputs for other programs). Therefore, if you don't install MakeProfiles
-for example, the tests for many the other programs will be skipped. To
-avoid this, in one run, you can install all the programs and run the tests
-but not install. If everything is working correctly, you can run configure
-again with only the programs you want. However, don't run the tests and
-directly install after building.
+The tests of some programs might depend on the outputs of the tests of other programs.
+For example MakeProfiles is one of the first programs to be tested when you run @command{$ make check}.
+MakeProfiles' test outputs (FITS images) are inputs to many other programs (which in turn provide inputs for other programs).
+Therefore, if you disable MakeProfiles for example, the tests of many of the other programs will be skipped.
+To avoid this, in a first run you can build all the programs and run the tests, but not install.
+If everything is working correctly, you can run configure again with only the programs you want.
+In that second run, don't run the tests; just build and install directly.
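+For example, a sketch of this two-pass approach (the program enabled in the second pass is just an illustration):
+
+@example
+## First pass: build everything and run the tests, don't install.
+$ ./configure
+$ make
+$ make check
+
+## Second pass: only the desired programs, install without tests.
+$ ./configure --enable-crop
+$ make
+$ make install
+@end example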
 
 
 
@@ -6375,35 +4830,18 @@ directly install after building.
 @cindex Root access, not possible
 @cindex No access to super-user install
 @cindex Install with no super-user access
-One of the most commonly used options to @file{./configure} is
-@option{--prefix}, it is used to define the directory that will host all
-the installed files (or the ``prefix'' in their final absolute file
-name). For example, when you are using a server and you don't have
-administrator or root access. In this example scenario, if you don't use
-the @option{--prefix} option, you won't be able to install the built files
-and thus access them from anywhere without having to worry about where they
-are installed. However, once you prepare your startup file to look into the
-proper place (as discussed thoroughly below), you will be able to easily
-use this option and benefit from any software you want to install without
-having to ask the system administrators or install and use a different
-version of a software that is already installed on the server.
-
-The most basic way to run an executable is to explicitly write its full
-file name (including all the directory information) and run it. One example
-is running the configuration script with the @command{$ ./configure}
-command (see @ref{Quick start}). By giving a specific directory (the
-current directory or @file{./}), we are explicitly telling the shell to
-look in the current directory for an executable file named
-`@file{configure}'. Directly specifying the directory is thus useful for
-executables in the current (or nearby) directories. However, when the
-program (an executable file) is to be used a lot, specifying all those
-directories will become a significant burden. For example, the @file{ls}
-executable lists the contents in a given directory and it is (usually)
-installed in the @file{/usr/bin/} directory by the operating system
-maintainers. Therefore, if using the full address was the only way to
-access an executable, each time you wanted a listing of a directory, you
-would have to run the following command (which is very inconvenient, both
-in writing and in remembering the various directories).
+One of the most commonly used options to @file{./configure} is @option{--prefix}; it is used to define the directory that will host all the installed files (or the ``prefix'' in their final absolute file name).
+One example is when you are using a server and you don't have administrator or root access.
+In this scenario, if you don't use the @option{--prefix} option, you won't be able to install the built files in the (root-owned) default prefix, and thus won't be able to access them from anywhere without having to worry about where they are installed.
+However, once you prepare your startup file to look into the proper place (as discussed thoroughly below), you will be able to easily use this option and benefit from any software you want to install, without having to ask the system administrators; you can even install and use a different version of a software that is already installed on the server.
+
+The most basic way to run an executable is to explicitly write its full file 
name (including all the directory information) and run it.
+One example is running the configuration script with the @command{$ 
./configure} command (see @ref{Quick start}).
+By giving a specific directory (the current directory or @file{./}), we are 
explicitly telling the shell to look in the current directory for an executable 
file named `@file{configure}'.
+Directly specifying the directory is thus useful for executables in the 
current (or nearby) directories.
+However, when the program (an executable file) is to be used a lot, specifying 
all those directories will become a significant burden.
+For example, the @file{ls} executable lists the contents in a given directory 
and it is (usually) installed in the @file{/usr/bin/} directory by the 
operating system maintainers.
+Therefore, if using the full address were the only way to access an executable, each time you wanted a listing of a directory, you would have to run the following command (which is very inconvenient, both in writing and in remembering the various directories).
 
 @example
 $ /usr/bin/ls
@@ -6411,21 +4849,18 @@ $ /usr/bin/ls
 
 @cindex Shell variables
 @cindex Environment variables
-To address this problem, we have the @file{PATH} environment variable. To
-understand it better, we will start with a short introduction to the shell
-variables. Shell variable values are basically treated as strings of
-characters. For example, it doesn't matter if the value is a name (string
-of @emph{alphabetic} characters), or a number (string of @emph{numeric}
-characters), or both. You can define a variable and a value for it by
-running
+To address this problem, we have the @file{PATH} environment variable.
+To understand it better, we will start with a short introduction to shell variables.
+Shell variable values are basically treated as strings of characters.
+For example, it doesn't matter if the value is a name (string of 
@emph{alphabetic} characters), or a number (string of @emph{numeric} 
characters), or both.
+You can define a variable and a value for it by running
 @example
 $ myvariable1=a_test_value
 $ myvariable2="a test value"
 @end example
 @noindent
-As you see above, if the value contains white space characters, you have to
-put the whole value (including white space characters) in double quotes
-(@key{"}). You can see the value it represents by running
+As you see above, if the value contains white space characters, you have to 
put the whole value (including white space characters) in double quotes 
(@key{"}).
+You can see the value it represents by running
 @example
 $ echo $myvariable1
 $ echo $myvariable2
@@ -6433,19 +4868,13 @@ $ echo $myvariable2
 @noindent
 @cindex Environment
 @cindex Environment variables
-If a variable has no value or it wasn't defined, the last command will only
-print an empty line. A variable defined like this will be known as long as
-this shell or terminal is running. Other terminals will have no idea it
-existed.  The main advantage of shell variables is that if they are
-exported@footnote{By running @command{$ export myvariable=a_test_value}
-instead of the simpler case in the text}, subsequent programs that are run
-within that shell can access their value. So by changing their value, you
-can change the ``environment'' of a program which uses them. The shell
-variables which are accessed by programs are therefore known as
-``environment variables''@footnote{You can use shell variables for other
-actions too, for example to temporarily keep some names or run loops on
-some files.}. You can see the full list of exported variables that your
-shell recognizes by running:
+If a variable has no value or it wasn't defined, the last command will only 
print an empty line.
+A variable defined like this will be known as long as this shell or terminal 
is running.
+Other terminals will have no idea it existed.
+The main advantage of shell variables is that if they are exported@footnote{By running @command{$ export myvariable=a_test_value} instead of the simpler case in the text.}, subsequent programs that are run within that shell can access their value.
+So by changing their value, you can change the ``environment'' of a program 
which uses them.
+The shell variables which are accessed by programs are therefore known as 
``environment variables''@footnote{You can use shell variables for other 
actions too, for example to temporarily keep some names or run loops on some 
files.}.
+You can see the full list of exported variables that your shell recognizes by 
running:
 
 @example
 $ printenv
@@ -6454,58 +4883,36 @@ $ printenv
 @cindex @file{HOME}
 @cindex @file{HOME/.local/}
 @cindex Environment variable, @code{HOME}
-@file{HOME} is one commonly used environment variable, it is any user's
-(the one that is logged in) top directory. Try finding it in the command
-above. It is used so often that the shell has a special expansion
-(alternative) for it: `@file{~}'. Whenever you see file names starting with
-the tilde sign, it actually represents the value to the @file{HOME}
-environment variable, so @file{~/doc} is the same as @file{$HOME/doc}.
+@file{HOME} is one commonly used environment variable; it is the top directory of the user that is currently logged in.
+Try finding it in the command above.
+It is used so often that the shell has a special expansion (alternative) for it: `@file{~}'.
+Whenever you see file names starting with the tilde sign, it actually represents the value of the @file{HOME} environment variable, so @file{~/doc} is the same as @file{$HOME/doc}.
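+For example, a quick check (the printed value below is just an illustration, for a user named `@file{name}'):
+
+@example
+$ echo $HOME
+/home/name
+@end example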
 
 @vindex PATH
 @pindex ./configure
 @cindex Setting @code{PATH}
 @cindex Default executable search directory
 @cindex Search directory for executables
-Another one of the most commonly used environment variables is @file{PATH},
-it is a list of directories to search for executable names.  Its value is a
-list of directories (separated by a colon, or `@key{:}'). When the address
-of the executable is not explicitly given (like @file{./configure} above),
-the system will look for the executable in the directories specified by
-@file{PATH}. If you have a computer nearby, try running the following
-command to see which directories your system will look into when it is
-searching for executable (binary) files, one example is printed here
-(notice how @file{/usr/bin}, in the @file{ls} example above, is one of the
-directories in @command{PATH}):
+Another one of the most commonly used environment variables is @file{PATH}; it is a list of directories to search for executable names.
+Its value is a list of directories (separated by a colon, or `@key{:}').
+When the address of the executable is not explicitly given (like @file{./configure} above), the system will look for the executable in the directories specified by @file{PATH}.
+If you have a computer nearby, try running the following command to see which directories your system will look into when it is searching for executable (binary) files; one example output is printed here (notice how @file{/usr/bin}, in the @file{ls} example above, is one of the directories in @command{PATH}):
 
 @example
 $ echo $PATH
 /usr/local/sbin:/usr/local/bin:/usr/bin
 @end example
 
-By default @file{PATH} usually contains system-wide directories, which are
-readable (but not writable) by all users, like the above example. Therefore
-if you don't have root (or administrator) access, you need to add another
-directory to @file{PATH} which you actually have write access to. The
-standard directory where you can keep installed files (not just
-executables) for your own user is the @file{~/.local/} directory. The names
-of hidden files start with a `@key{.}' (dot), so it will not show up in
-your common command-line listings, or on the graphical user interface. You
-can use any other directory, but this is the most recognized.
-
-The top installation directory will be used to keep all the package's
-components: programs (executables), libraries, include (header) files,
-shared data (like manuals), or configuration files (see @ref{Review of
-library fundamentals} for a thorough introduction to headers and
-linking). So it commonly has some of the following sub-directories for each
-class of installed components respectively: @file{bin/}, @file{lib/},
-@file{include/} @file{man/}, @file{share/}, @file{etc/}. Since the
-@file{PATH} variable is only used for executables, you can add the
-@file{~/.local/bin} directory (which keeps the executables/programs or more
-generally, ``binary'' files) to @file{PATH} with the following command. As
-defined below, first the existing value of @file{PATH} is used, then your
-given directory is added to its end and the combined value is put back in
-@file{PATH} (run `@command{$ echo $PATH}' afterwards to check if it was
-added).
+By default @file{PATH} usually contains system-wide directories, which are 
readable (but not writable) by all users, like the above example.
+Therefore if you don't have root (or administrator) access, you need to add 
another directory to @file{PATH} which you actually have write access to.
+The standard directory where you can keep installed files (not just executables) for your own user is the @file{~/.local/} directory.
+The names of hidden files and directories start with a `@key{.}' (dot), so @file{.local} will not show up in your common command-line listings or on the graphical user interface.
+You can use any other directory, but this is the most recognized.
+
+The top installation directory will be used to keep all the package's 
components: programs (executables), libraries, include (header) files, shared 
data (like manuals), or configuration files (see @ref{Review of library 
fundamentals} for a thorough introduction to headers and linking).
+So it commonly has some of the following sub-directories for each class of installed components respectively: @file{bin/}, @file{lib/}, @file{include/}, @file{man/}, @file{share/}, @file{etc/}.
+Since the @file{PATH} variable is only used for executables, you can add the 
@file{~/.local/bin} directory (which keeps the executables/programs or more 
generally, ``binary'' files) to @file{PATH} with the following command.
+As defined below, first the existing value of @file{PATH} is used, then your 
given directory is added to its end and the combined value is put back in 
@file{PATH} (run `@command{$ echo $PATH}' afterwards to check if it was added).
 
 @example
 $ PATH=$PATH:~/.local/bin
@@ -6514,44 +4921,32 @@ $ PATH=$PATH:~/.local/bin
 @cindex GNU Bash
 @cindex Startup scripts
 @cindex Scripts, startup
-Any executable that you installed in @file{~/.local/bin} will now be usable
-without having to remember and write its full address. However, as soon as
-you leave/close your current terminal session, this modified @file{PATH}
-variable will be forgotten. Adding the directories which contain
-executables to the @file{PATH} environment variable each time you start a
-terminal is also very inconvenient and prone to errors. Fortunately, there
-are standard `startup files' defined by your shell precisely for this (and
-other) purposes. There is a special startup file for every significant
-starting step:
+Any executable that you installed in @file{~/.local/bin} will now be usable 
without having to remember and write its full address.
+However, as soon as you leave/close your current terminal session, this 
modified @file{PATH} variable will be forgotten.
+Adding the directories which contain executables to the @file{PATH} 
environment variable each time you start a terminal is also very inconvenient 
and prone to errors.
+Fortunately, there are standard `startup files' defined by your shell 
precisely for this (and other) purposes.
+There is a special startup file for every significant starting step:
 
 @table @asis
 
 @cindex GNU Bash
 @item @file{/etc/profile} and everything in @file{/etc/profile.d/}
-These startup scripts are called when your whole system starts (for example
-after you turn on your computer). Therefore you need administrator or root
-privileges to access or modify them.
+These startup scripts are called when your whole system starts (for example 
after you turn on your computer).
+Therefore you need administrator or root privileges to access or modify them.
 
 @item @file{~/.bash_profile}
-If you are using (GNU) Bash as your shell, the commands in this file are
-run, when you log in to your account @emph{through Bash}. Most commonly
-when you login through the virtual console (where there is no graphic user
-interface).
+If you are using (GNU) Bash as your shell, the commands in this file are run when you log in to your account @emph{through Bash}, most commonly when you log in through the virtual console (where there is no graphic user interface).
 
 @item @file{~/.bashrc}
-If you are using (GNU) Bash as your shell, the commands here will be run
-each time you start a terminal and are already logged in. For example, when
-you open your terminal emulator in the graphic user interface.
+If you are using (GNU) Bash as your shell, the commands here will be run each 
time you start a terminal and are already logged in.
+For example, when you open your terminal emulator in the graphic user 
interface.
 
 @end table
 
-For security reasons, it is highly recommended to directly type in your
-@file{HOME} directory value by hand in startup files instead of using
-variables. So in the following, let's assume your user name is
-`@file{name}' (so @file{~} may be replaced with @file{/home/name}). To add
-@file{~/.local/bin} to your @file{PATH} automatically on any startup file,
-you have to ``export'' the new value of @command{PATH} in the startup file
-that is most relevant to you by adding this line:
+For security reasons, it is highly recommended to directly type in your 
@file{HOME} directory value by hand in startup files instead of using variables.
+So in the following, let's assume your user name is `@file{name}' (so @file{~} 
may be replaced with @file{/home/name}).
+To add @file{~/.local/bin} to your @file{PATH} automatically on any startup 
file, you have to ``export'' the new value of @command{PATH} in the startup 
file that is most relevant to you by adding this line:
 
 @example
 export PATH=$PATH:/home/name/.local/bin
@@ -6560,20 +4955,9 @@ export PATH=$PATH:/home/name/.local/bin
 @cindex GNU build system
 @cindex Install directory
 @cindex Directory, install
-Now that you know your system will look into @file{~/.local/bin} for
-executables, you can tell Gnuastro's configure script to install everything
-in the top @file{~/.local} directory using the @option{--prefix}
-option. When you subsequently run @command{$ make install}, all the
-install-able files will be put in their respective directory under
-@file{~/.local/} (the executables in @file{~/.local/bin}, the compiled
-library files in @file{~/.local/lib}, the library header files in
-@file{~/.local/include} and so on, to learn more about these different
-files, please see @ref{Review of library fundamentals}). Note that tilde
-(`@key{~}') expansion will not happen if you put a `@key{=}' between
-@option{--prefix} and @file{~/.local}@footnote{If you insist on using
-`@key{=}', you can use @option{--prefix=$HOME/.local}.}, so we have avoided
-the @key{=} character here which is optional in GNU-style options, see
-@ref{Options}.
+Now that you know your system will look into @file{~/.local/bin} for 
executables, you can tell Gnuastro's configure script to install everything in 
the top @file{~/.local} directory using the @option{--prefix} option.
+When you subsequently run @command{$ make install}, all the install-able files 
will be put in their respective directory under @file{~/.local/} (the 
executables in @file{~/.local/bin}, the compiled library files in 
@file{~/.local/lib}, the library header files in @file{~/.local/include} and so 
on, to learn more about these different files, please see @ref{Review of 
library fundamentals}).
+Note that tilde (`@key{~}') expansion will not happen if you put a `@key{=}' 
between @option{--prefix} and @file{~/.local}@footnote{If you insist on using 
`@key{=}', you can use @option{--prefix=$HOME/.local}.}, so we have avoided the 
@key{=} character here which is optional in GNU-style options, see 
@ref{Options}.
 
 @example
 $ ./configure --prefix ~/.local
@@ -6584,17 +4968,12 @@ $ ./configure --prefix ~/.local
 @cindex @file{LD_LIBRARY_PATH}
 @cindex Library search directory
 @cindex Default library search directory
-You can install everything (including libraries like GSL, CFITSIO, or
-WCSLIB which are Gnuastro's mandatory dependencies, see @ref{Mandatory
-dependencies}) locally by configuring them as above. However, recall that
-@command{PATH} is only for executable files, not libraries and that
-libraries can also depend on other libraries. For example WCSLIB depends on
-CFITSIO and Gnuastro needs both. Therefore, when you installed a library in
-a non-recognized directory, you have to guide the program that depends on
-them to look into the necessary library and header file directories. To do
-that, you have to define the @command{LDFLAGS} and @command{CPPFLAGS}
-environment variables respectively. This can be done while calling
-@file{./configure} as shown below:
+You can install everything (including libraries like GSL, CFITSIO, or WCSLIB 
which are Gnuastro's mandatory dependencies, see @ref{Mandatory dependencies}) 
locally by configuring them as above.
+However, recall that @command{PATH} is only for executable files, not 
libraries and that libraries can also depend on other libraries.
+For example WCSLIB depends on CFITSIO and Gnuastro needs both.
+Therefore, when you install a library in a non-recognized directory, you have to guide the programs that depend on it to look into the necessary library and header file directories.
+To do that, you have to define the @command{LDFLAGS} and @command{CPPFLAGS} environment variables respectively.
+This can be done while calling @file{./configure} as shown below:
 
 @example
 $ ./configure LDFLAGS=-L/home/name/.local/lib            \
@@ -6602,19 +4981,10 @@ $ ./configure LDFLAGS=-L/home/name/.local/lib           
 \
               --prefix ~/.local
 @end example
 
-It can be annoying/buggy to do this when configuring every software that
-depends on such libraries. Hence, you can define these two variables in the
-most relevant startup file (discussed above). The convention on using these
-variables doesn't include a colon to separate values (as
-@command{PATH}-like variables do), they use white space characters and each
-value is prefixed with a compiler option@footnote{These variables are
-ultimately used as options while building the programs, so every value has
-be an option name followed be a value as discussed in @ref{Options}.}: note
-the @option{-L} and @option{-I} above (see @ref{Options}), for @option{-I}
-see @ref{Headers}, and for @option{-L}, see @ref{Linking}. Therefore we
-have to keep the value in double quotation signs to keep the white space
-characters and adding the following two lines to the startup file of
-choice:
+It can be annoying/buggy to do this when configuring every software that depends on such libraries.
+Hence, you can define these two variables in the most relevant startup file (discussed above).
+The convention on using these variables doesn't include a colon to separate values (as @command{PATH}-like variables do); they use white space characters, and each value is prefixed with a compiler option@footnote{These variables are ultimately used as options while building the programs, so every value has to be an option name followed by a value, as discussed in @ref{Options}.}: note the @option{-L} and @option{-I} above (see @ref{Options}); for @option{-I} see @ref{Headers}, and for @option{-L}, see @ref{Linking}.
+Therefore we have to keep the value in double quotation signs (to preserve the white space characters) and add the following two lines to the startup file of choice:
 
 @example
 export LDFLAGS="$LDFLAGS -L/home/name/.local/lib"
@@ -6622,75 +4992,40 @@ export CPPFLAGS="$CPPFLAGS -I/home/name/.local/include"
 @end example
 
 @cindex Dynamic libraries
-Dynamic libraries are linked to the executable every time you run a program
-that depends on them (see @ref{Linking} to fully understand this important
-concept). Hence dynamic libraries also require a special path variable
-called @command{LD_LIBRARY_PATH} (same formatting as @command{PATH}). To
-use programs that depend on these libraries, you need to add
-@file{~/.local/lib} to your @command{LD_LIBRARY_PATH} environment variable
-by adding the following line to the relevant start-up file:
+Dynamic libraries are linked to the executable every time you run a program 
that depends on them (see @ref{Linking} to fully understand this important 
concept).
+Hence dynamic libraries also require a special path variable called 
@command{LD_LIBRARY_PATH} (same formatting as @command{PATH}).
+To use programs that depend on these libraries, you need to add 
@file{~/.local/lib} to your @command{LD_LIBRARY_PATH} environment variable by 
adding the following line to the relevant start-up file:
 
 @example
 export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/home/name/.local/lib
 @end example
 
-If you also want to access the Info (see @ref{Info}) and man pages (see
-@ref{Man pages}) documentations add @file{~/.local/share/info} and
-@file{~/.local/share/man} to your @command{INFOPATH}@footnote{Info has the
-following convention: ``If the value of @command{INFOPATH} ends with a
-colon [or it isn't defined] ..., the initial list of directories is
-constructed by appending the build-time default to the value of
-@command{INFOPATH}.'' So when installing in a non-standard directory and if
-@command{INFOPATH} was not initially defined, add a colon to the end of
-@command{INFOPATH} as shown below, otherwise Info will not be able to find
-system-wide installed documentation:@*@command{echo 'export
-INFOPATH=$INFOPATH:/home/name/.local/share/info:' >> ~/.bashrc}@* Note that
-this is only an internal convention of Info, do not use it for other
-@command{*PATH} variables.} and @command{MANPATH} environment variables
-respectively.
+If you also want to access the Info (see @ref{Info}) and man pages (see @ref{Man pages}) documentation, add @file{~/.local/share/info} and @file{~/.local/share/man} to your @command{INFOPATH}@footnote{Info has the following convention: ``If the value of @command{INFOPATH} ends with a colon [or it isn't defined] ..., the initial list of directories is constructed by appending the build-time default to the value of @command{INFOPATH}.'' So when installing in a non-standard directory and if @command{INFOPATH} was not initially defined, add a colon to the end of @command{INFOPATH} as shown below, otherwise Info will not be able to find system-wide installed documentation:@*@command{echo 'export INFOPATH=$INFOPATH:/home/name/.local/share/info:' >> ~/.bashrc}@* Note that this is only an internal convention of Info; do not use it for other @command{*PATH} variables.} and @command{MANPATH} environment variables respectively.
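+For example, a sketch of the two corresponding lines in a startup file (note the trailing colon on @command{INFOPATH}, following the Info convention in the footnote above):
+
+@example
+export INFOPATH=$INFOPATH:/home/name/.local/share/info:
+export MANPATH=$MANPATH:/home/name/.local/share/man
+@end example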
 
 @cindex Search directory order
 @cindex Order in search directory
-A final note is that order matters in the directories that are searched for
-all the variables discussed above. In the examples above, the new directory
-was added after the system specified directories. So if the program,
-library or manuals are found in the system wide directories, the user
-directory is no longer searched. If you want to search your local
-installation first, put the new directory before the already existing list,
-like the example below.
+A final note is that order matters in the directories that are searched for all the variables discussed above.
+In the examples above, the new directory was added after the system-specified directories.
+So if the program, library or manuals are found in the system-wide directories, the user directory is no longer searched.
+If you want to search your local installation first, put the new directory before the already existing list, like the example below.
 
 @example
 export LD_LIBRARY_PATH=/home/name/.local/lib:$LD_LIBRARY_PATH
 @end example
 
 @noindent
-This is good when a library, for example CFITSIO, is already present on the
-system, but the system-wide install wasn't configured with the correct
-configuration flags (see @ref{CFITSIO}), or you want to use a newer version
-and you don't have administrator or root access to update it on the whole
-system/server. If you update @file{LD_LIBRARY_PATH} by placing
-@file{~/.local/lib} first (like above), the linker will first find the
-CFITSIO you installed for yourself and link with it. It thus will never
-reach the system-wide installation.
+This is good when a library, for example CFITSIO, is already present on the 
system, but the system-wide install wasn't configured with the correct 
configuration flags (see @ref{CFITSIO}), or you want to use a newer version and 
you don't have administrator or root access to update it on the whole 
system/server.
+If you update @file{LD_LIBRARY_PATH} by placing @file{~/.local/lib} first 
(like above), the linker will first find the CFITSIO you installed for yourself 
and link with it.
+It thus will never reach the system-wide installation.
 
-There are important security problems with using local installations first:
-all important system-wide executables and libraries (important executables
-like @command{ls} and @command{cp}, or libraries like the C library) can be
-replaced by non-secure versions with the same file names and put in the
-customized directory (@file{~/.local} in this example). So if you choose to
-search in your customized directory first, please @emph{be sure} to keep it
-clean from executables or libraries with the same names as important system
-programs or libraries.
+There are important security problems with using local installations first: 
all important system-wide executables and libraries (important executables like 
@command{ls} and @command{cp}, or libraries like the C library) can be replaced 
by non-secure versions with the same file names and put in the customized 
directory (@file{~/.local} in this example).
+So if you choose to search in your customized directory first, please @emph{be 
sure} to keep it clean from executables or libraries with the same names as 
important system programs or libraries.
 
 @cartouche
 @noindent
-@strong{Summary:} When you are using a server which doesn't give you
-administrator/root access AND you would like to give priority to your own
-built programs and libraries, not the version that is (possibly already)
-present on the server, add these lines to your startup file. See above for
-which startup file is best for your case and for a detailed explanation on
-each. Don't forget to replace `@file{/YOUR-HOME-DIR}' with your home
-directory (for example `@file{/home/your-id}'):
+@strong{Summary:} When you are using a server which doesn't give you 
administrator/root access AND you would like to give priority to your own built 
programs and libraries, not the version that is (possibly already) present on 
the server, add these lines to your startup file.
+See above for which startup file is best for your case and for a detailed 
explanation on each.
+Don't forget to replace `@file{/YOUR-HOME-DIR}' with your home directory (for 
example `@file{/home/your-id}'):
 
 @example
 export PATH="/YOUR-HOME-DIR/.local/bin:$PATH"
@@ -6702,10 +5037,8 @@ export 
LD_LIBRARY_PATH="/YOUR-HOME-DIR/.local/lib:$LD_LIBRARY_PATH"
 @end example
 
 @noindent
-Afterwards, you just need to add an extra
-@option{--prefix=/YOUR-HOME-DIR/.local} to the @file{./configure} command
-of the software that you intend to install. Everything else will be the
-same as a standard build and install, see @ref{Quick start}.
+Afterwards, you just need to add an extra 
@option{--prefix=/YOUR-HOME-DIR/.local} to the @file{./configure} command of 
the software that you intend to install.
+Everything else will be the same as a standard build and install, see 
@ref{Quick start}.
 @end cartouche
 
 @node Executable names, Configure and build in RAM, Installation directory, 
Configuring
@@ -6713,52 +5046,38 @@ same as a standard build and install, see @ref{Quick 
start}.
 
 @cindex Executable names
 @cindex Names of executables
-At first sight, the names of the executables for each program might seem to
-be uncommonly long, for example @command{astnoisechisel} or
-@command{astcrop}. We could have chosen terse (and cryptic) names like most
-programs do. We chose this complete naming convention (something like the
-commands in @TeX{}) so you don't have to spend too much time remembering
-what the name of a specific program was. Such complete names also enable
-you to easily search for the programs.
+At first sight, the names of the executables for each program might seem to be 
uncommonly long, for example @command{astnoisechisel} or @command{astcrop}.
+We could have chosen terse (and cryptic) names like most programs do.
+We chose this complete naming convention (something like the commands in 
@TeX{}) so you don't have to spend too much time remembering what the name of a 
specific program was.
+Such complete names also enable you to easily search for the programs.
 
 @cindex Shell auto-complete
 @cindex Auto-complete in the shell
-To facilitate typing the names in, we suggest using the shell
-auto-complete. With this facility you can find the executable you want
-very easily. It is very similar to file name completion in the
-shell. For example, simply by typing the letters below (where
-@key{[TAB]} stands for the Tab key on your keyboard)
+To facilitate typing the names in, we suggest using the shell auto-complete.
+With this facility you can find the executable you want very easily.
+It is very similar to file name completion in the shell.
+For example, simply by typing the letters below (where @key{[TAB]} stands for 
the Tab key on your keyboard)
 
 @example
 $ ast[TAB][TAB]
 @end example
 
 @noindent
-you will get the list of all the available executables that start with
-@command{ast} in your @command{PATH} environment variable
-directories. So, all the Gnuastro executables installed on your system
-will be listed. Typing the next letter for the specific program you
-want along with a Tab, will limit this list until you get to your
-desired program.
+you will get the list of all the available executables that start with 
@command{ast} in your @command{PATH} environment variable directories.
+So, all the Gnuastro executables installed on your system will be listed.
+Typing the next letter of the specific program you want, along with a Tab, will limit this list until you get to your desired program.
 
 @cindex Names, customize
 @cindex Customize executable names
-In case all of this does not convince you and you still want to type
-short names, some suggestions are given below. You should have in mind
-though, that if you are writing a shell script that you might want to
-pass on to others, it is best to use the standard name because other
-users might not have adopted the same customization. The long names
-also serve as a form of documentation in such scripts. A similar
-reasoning can be given for option names in scripts: it is good
-practice to always use the long formats of the options in shell
-scripts, see @ref{Options}.
+In case all of this does not convince you and you still want to type short 
names, some suggestions are given below.
+You should bear in mind, though, that if you are writing a shell script that you might want to pass on to others, it is best to use the standard name because other users might not have adopted the same customization.
+The long names also serve as a form of documentation in such scripts.
+A similar reasoning can be given for option names in scripts: it is good 
practice to always use the long formats of the options in shell scripts, see 
@ref{Options}.
 
 @cindex Symbolic link
-The simplest solution is making a symbolic link to the actual
-executable. For example let's assume you want to type @file{ic} to run Crop
-instead of @file{astcrop}. Assuming you installed Gnuastro executables in
-@file{/usr/local/bin} (default) you can do this simply by running the
-following command as root:
+The simplest solution is making a symbolic link to the actual executable.
+For example let's assume you want to type @file{ic} to run Crop instead of 
@file{astcrop}.
+Assuming you installed the Gnuastro executables in @file{/usr/local/bin} (the default), you can do this simply by running the following command as root:
 
 @example
 # ln -s /usr/local/bin/astcrop /usr/local/bin/ic
@@ -6772,32 +5091,21 @@ works.
 @vindex --program-prefix
 @vindex --program-suffix
 @vindex --program-transform-name
-The installed executable names can also be set using options to
-@command{$ ./configure}, see @ref{Configuring}. GNU Autoconf (which
-configures Gnuastro for your particular system), allows the builder
-to change the name of programs with the three options
-@option{--program-prefix}, @option{--program-suffix} and
-@option{--program-transform-name}. The first two are for adding a
-fixed prefix or suffix to all the programs that will be installed.
-This will actually make all the names longer!  You can use it to add
-versions of program names to the programs in order to simultaneously
-have two executable versions of a program.
+The installed executable names can also be set using options to @command{$ ./configure}, see @ref{Configuring}.
+GNU Autoconf (which configures Gnuastro for your particular system) allows the builder to change the name of programs with the three options @option{--program-prefix}, @option{--program-suffix} and @option{--program-transform-name}.
+The first two are for adding a fixed prefix or suffix to all the programs that will be installed.
+This will actually make all the names longer!
+You can use them, for example, to add version identifiers to the program names in order to simultaneously have two executable versions of a program.
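+For example, a sketch of adding a fixed suffix to every installed executable (the suffix string here is just an illustration, so Crop would be installed as @file{astcrop-new}):
+
+@example
+$ ./configure --program-suffix=-new
+@end example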
 
 @cindex SED, stream editor
 @cindex Stream editor, SED
-The third configure option allows you to set the executable name at install
-time using the SED program. SED is a very useful `stream editor'. There are
-various resources on the internet to use it effectively. However, we should
-caution that using configure options will change the actual executable name
-of the installed program and on every re-install (an update for example),
-you have to also add this option to keep the old executable name
-updated. Also note that the documentation or configuration files do not
-change from their standard names either.
+The third configure option allows you to set the executable name at install time using the SED program.
+SED is a very useful `stream editor'.
+There are various resources on the internet on using it effectively.
+However, we should caution that using configure options will change the actual executable name of the installed program, so on every re-install (an update for example) you have to add this option again to keep the customized executable name.
+Also note that the documentation and configuration files do not change from their standard names.
 
 @cindex Removing @file{ast} from executables
-For example, let's assume that typing @file{ast} on every invocation
-of every program is really annoying you! You can remove this prefix
-from all the executables at configure time by adding this option:
+For example, let's assume that typing @file{ast} on every invocation of every program is really annoying you!
+You can remove this prefix from all the executables at configure time by adding this option:
 
 @example
 $ ./configure --program-transform-name='s/ast//'
@@ -6810,11 +5118,9 @@ $ ./configure --program-transform-name='s/ast/ /'
 
 @cindex File I/O
 @cindex Input/Output, file
-Gnuastro's configure and build process (the GNU build system) involves the
-creation, reading, and modification of a large number of files
-(input/output, or I/O). Therefore file I/O issues can directly affect the
-work of developers who need to configure and build Gnuastro numerous
-times. Some of these issues are listed below:
+Gnuastro's configure and build process (the GNU build system) involves the 
creation, reading, and modification of a large number of files (input/output, 
or I/O).
+Therefore file I/O issues can directly affect the work of developers who need 
to configure and build Gnuastro numerous times.
+Some of these issues are listed below:
 
 @itemize
 @cindex HDD
@@ -6825,37 +5131,26 @@ SSDs (decreasing the lifetime).
 
 @cindex Backup
 @item
-Having the built files mixed with the source files can greatly affect
-backing up (synchronization) of source files (since it involves the
-management of a large number of small files that are regularly
-changed. Backup software can of course be configured to ignore the built
-files and directories. However, since the built files are mixed with the
-source files and can have a large variety, this will require a high level
-of customization.
+Having the built files mixed with the source files can greatly affect backing up (synchronization) of source files (since it involves the management of a large number of small files that are regularly changed).
+Backup software can of course be configured to ignore the built files and 
directories.
+However, since the built files are mixed with the source files and can have a 
large variety, this will require a high level of customization.
 @end itemize
 
 @cindex tmpfs file system
 @cindex file systems, tmpfs
-One solution to address both these problems is to use the
-@url{https://en.wikipedia.org/wiki/Tmpfs, tmpfs file system}. Any file in
-tmpfs is actually stored in the RAM (and possibly SAWP), not on HDDs or
-SSDs. The RAM is built for extensive and fast I/O. Therefore the large
-number of file I/Os associated with configuring and building will not harm
-the HDDs or SSDs. Due to the volatile nature of RAM, files in the tmpfs
-file-system will be permanently lost after a power-off. Since all configured
-and built files are derivative files (not files that have been directly
-written by hand) there is no problem in this and this feature can be
-considered as an automatic cleanup.
+One solution to address both these problems is to use the @url{https://en.wikipedia.org/wiki/Tmpfs, tmpfs file system}.
+Any file in tmpfs is actually stored in the RAM (and possibly swap), not on HDDs or SSDs.
+The RAM is built for extensive and fast I/O.
+Therefore the large number of file I/Os associated with configuring and building will not harm the HDDs or SSDs.
+Due to the volatile nature of RAM, files in the tmpfs file system will be permanently lost after a power-off.
+Since all configured and built files are derivative files (not files that have been directly written by hand), this is not a problem and the feature can even be considered an automatic cleanup.
 
 @cindex Linux kernel
 @cindex GNU C library
 @cindex GNU build system
-The modern GNU C library (and thus the Linux kernel) defines the
-@file{/dev/shm} directory for this purpose in the RAM (POSIX shared
-memory). To build in it, you can use the GNU build system's ability to
-build in a separate directory (not necessarily in the source directory) as
-shown below. Just set @file{SRCDIR} as the address of Gnuastro's top source
-directory (for example, the unpacked tarball).
+The modern GNU C library (and thus the Linux kernel) defines the 
@file{/dev/shm} directory for this purpose in the RAM (POSIX shared memory).
+To build in it, you can use the GNU build system's ability to build in a 
separate directory (not necessarily in the source directory) as shown below.
+Just set @file{SRCDIR} as the address of Gnuastro's top source directory (for 
example, the unpacked tarball).
 
 @example
 $ mkdir /dev/shm/tmp-gnuastro-build
@@ -6864,144 +5159,111 @@ $ SRCDIR/configure --srcdir=SRCDIR
 $ make
 @end example
 
-Gnuastro comes with a script to simplify this process of configuring and
-building in a different directory (a ``clean'' build), for more see
-@ref{Separate build and source directories}.
+Gnuastro comes with a script to simplify this process of configuring and 
building in a different directory (a ``clean'' build), for more see 
@ref{Separate build and source directories}.
 
 @node Separate build and source directories, Tests, Configuring, Build and 
install
 @subsection Separate build and source directories
 
 The simple steps of @ref{Quick start} will mix the source and built files.
-This can cause inconvenience for developers or enthusiasts following the
-the most recent work (see @ref{Version controlled source}). The current
-section is mainly focused on this later group of Gnuastro users. If you
-just install Gnuastro on major releases (following @ref{Announcements}),
-you can safely ignore this section.
+This can cause inconvenience for developers or enthusiasts following the most recent work (see @ref{Version controlled source}).
+The current section is mainly focused on this latter group of Gnuastro users.
+If you just install Gnuastro on major releases (following @ref{Announcements}), you can safely ignore this section.
 
 @cindex GNU build system
-When it is necessary to keep the source (which is under version control),
-but not the derivative (built) files (after checking or installing), the
-best solution is to keep the source and the built files in separate
-directories. One application of this is already discussed in @ref{Configure
-and build in RAM}.
-
-To facilitate this process of configuring and building in a separate
-directory, Gnuastro comes with the @file{developer-build} script. It is
-available in the top source directory and is @emph{not} installed. It will
-make a directory under a given top-level directory (given to
-@option{--top-build-dir}) and build Gnuastro in there directory. It thus
-keeps the source completely separated from the built files. For easy access
-to the built files, it also makes a symbolic link to the built directory in
-the top source files called @file{build}.
-
-When run without any options, default values will be used for its
-configuration. As with Gnuastro's programs, you can inspect the default
-values with @option{-P} (or @option{--printparams}, the output just looks a
-little different here). The default top-level build directory is
-@file{/dev/shm}: the shared memory directory in RAM on GNU/Linux systems as
-described in @ref{Configure and build in RAM}.
+When it is necessary to keep the source (which is under version control), but 
not the derivative (built) files (after checking or installing), the best 
solution is to keep the source and the built files in separate directories.
+One application of this is already discussed in @ref{Configure and build in 
RAM}.
+
+To facilitate this process of configuring and building in a separate directory, Gnuastro comes with the @file{developer-build} script.
+It is available in the top source directory and is @emph{not} installed.
+It will make a directory under a given top-level directory (given to @option{--top-build-dir}) and build Gnuastro in that directory.
+It thus keeps the source completely separated from the built files.
+For easy access to the built files, it also makes a symbolic link (called @file{build}) to the build directory in the top source directory.
+
+When run without any options, default values will be used for its 
configuration.
+As with Gnuastro's programs, you can inspect the default values with 
@option{-P} (or @option{--printparams}, the output just looks a little 
different here).
+The default top-level build directory is @file{/dev/shm}: the shared memory 
directory in RAM on GNU/Linux systems as described in @ref{Configure and build 
in RAM}.
 
 @cindex Debug
-Besides these, it also has some features to facilitate the job of
-developers or bleeding edge users like the @option{--debug} option to do a
-fast build, with debug information, no optimization, and no shared
-libraries. Here is the full list of options you can feed to this script to
-configure its operations.
+Besides these, it also has some features to facilitate the job of developers or bleeding-edge users, like the @option{--debug} option to do a fast build, with debug information, no optimization, and no shared libraries.
+Here is the full list of options you can feed to this script to configure its operations.
 
 @cartouche
 @noindent
 @strong{Not all of Gnuastro's common program behavior is usable here:}
-@file{developer-build} is just a non-installed script with a very limited
-scope as described above. It thus doesn't have all the common option
-behaviors or configuration files for example.
+@file{developer-build} is just a non-installed script with a very limited scope, as described above.
+It thus doesn't have all the common option behaviors or configuration files, for example.
 @end cartouche
 
 @cartouche
 @noindent
 @strong{White space between option and value:} @file{developer-build}
-doesn't accept an @key{=} sign between the options and their values. It
-also needs at least one character between the option and its
-value. Therefore @option{-n 4} or @option{--numthreads 4} are acceptable,
-while @option{-n4}, @option{-n=4}, or @option{--numthreads=4}
-aren't. Finally multiple short option names cannot be merged: for example
-you can say @option{-c -n 4}, but unlike Gnuastro's programs, @option{-cn4}
-is not acceptable.
+doesn't accept an @key{=} sign between the options and their values.
+It also needs at least one white-space character between the option and its value.
+Therefore @option{-n 4} or @option{--numthreads 4} are acceptable, while @option{-n4}, @option{-n=4}, or @option{--numthreads=4} aren't.
+Finally, multiple short option names cannot be merged: for example you can say @option{-c -n 4}, but unlike Gnuastro's programs, @option{-cn4} is not acceptable.
 @end cartouche
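+For example, following the syntax rules above, a call like this is accepted (the option values here are just an illustration):
+
+@example
+$ ./developer-build -c -d -j 4
+@end example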
 
 @cartouche
 @noindent
-@strong{Reusable for other packages:} This script can be used in any
-software which is configured and built using the GNU Build System. Just
-copy it in the top source directory of that software and run it from there.
+@strong{Reusable for other packages:} This script can be used in any software 
which is configured and built using the GNU Build System.
+Just copy it in the top source directory of that software and run it from 
there.
 @end cartouche
 
 @table @option
 
 @item -b STR
 @itemx --top-build-dir STR
-The top build directory to make a directory for the build. If this option
-isn't called, the top build directory is @file{/dev/shm} (only available in
-GNU/Linux operating systems, see @ref{Configure and build in RAM}).
+The top-level directory under which the directory for this build will be made.
+If this option isn't called, the top build directory is @file{/dev/shm} (only available in GNU/Linux operating systems, see @ref{Configure and build in RAM}).
 
 @item -V
 @itemx --version
-Print the version string of Gnuastro that will be used in the build. This
-string will be appended to the directory name containing the built files.
+Print the version string of Gnuastro that will be used in the build.
+This string will be appended to the directory name containing the built files.
 
 @item -a
 @itemx --autoreconf
-Run @command{autoreconf -f} before building the package. In Gnuastro, this
-is necessary when a new commit has been made to the project history. In
-Gnuastro's build system, the Git description will be used as the version,
-see @ref{Version numbering} and @ref{Synchronizing}.
+Run @command{autoreconf -f} before building the package.
+In Gnuastro, this is necessary when a new commit has been made to the project 
history.
+In Gnuastro's build system, the Git description will be used as the version, 
see @ref{Version numbering} and @ref{Synchronizing}.
 
 @item -c
 @itemx --clean
 @cindex GNU Autoreconf
-Delete the contents of the build directory (clean it) before starting the
-configuration and building of this run.
+Delete the contents of the build directory (clean it) before starting the 
configuration and building of this run.
 
-This is useful when you have recently pulled changes from the main Git
-repository, or committed a change your self and ran @command{autoreconf -f},
-see @ref{Synchronizing}. After running GNU Autoconf, the version will be
-updated and you need to do a clean build.
+This is useful when you have recently pulled changes from the main Git repository, or committed a change yourself and ran @command{autoreconf -f}, see @ref{Synchronizing}.
+After running GNU Autoconf, the version will be updated and you need to do a 
clean build.
 
 @item -d
 @itemx --debug
 @cindex Valgrind
 @cindex GNU Debugger (GDB)
-Build with debugging flags (for example to use in GNU Debugger, also known
-as GDB, or Valgrind), disable optimization and also the building of shared
-libraries. Similar to running the configure script of below
+Build with debugging flags (for example to use in GNU Debugger, also known as 
GDB, or Valgrind), disable optimization and also the building of shared 
libraries.
+This is similar to running the configure script as shown below:
 
 @example
 $ ./configure --enable-debug
 @end example
 
-Besides all the debugging advantages of building with this option, it will
-also be significantly speed up the build (at the cost of slower built
-programs). So when you are testing something small or working on the build
-system itself, it will be much faster to test your work with this option.
+Besides all the debugging advantages of building with this option, it will also significantly speed up the build (at the cost of slower built programs).
+So when you are testing something small or working on the build system itself, 
it will be much faster to test your work with this option.
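+
+For example, a quick test build would then simply be:
+
+@example
+$ ./developer-build -d
+@end example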
 
 @item -v
 @itemx --valgrind
 @cindex Valgrind
-Build all @command{make check} tests within Valgrind. For more, see the
-description of @option{--enable-check-with-valgrind} in @ref{Gnuastro
-configure options}.
+Build all @command{make check} tests within Valgrind.
+For more, see the description of @option{--enable-check-with-valgrind} in 
@ref{Gnuastro configure options}.
 
 @item -j INT
 @itemx --jobs INT
-The maximum number of threads/jobs for Make to build at any moment. As the
-name suggests (Make has an identical option), the number given to this
-option is directly passed on to any call of Make with its @option{-j}
-option.
+The maximum number of threads/jobs for Make to build at any moment.
+As the name suggests (Make has an identical option), the number given to this 
option is directly passed on to any call of Make with its @option{-j} option.
 
 @item -C
 @itemx --check
-After finishing the build, also run @command{make check}. By default,
-@command{make check} isn't run because the developer usually has their own
-checks to work on (for example defined in @file{tests/during-dev.sh}).
+After finishing the build, also run @command{make check}.
+By default, @command{make check} isn't run because the developer usually has 
their own checks to work on (for example defined in @file{tests/during-dev.sh}).
 
 @item -i
 @itemx --install
@@ -7009,44 +5271,32 @@ After finishing the build, also run @command{make 
install}.
 
 @item -D
 @itemx --dist
-Run @code{make dist-lzip pdf} to build a distribution tarball (in
-@file{.tar.lz} format) and a PDF manual. This can be useful for archiving,
-or sending to colleagues who don't use Git for an easy build and manual.
+Run @code{make dist-lzip pdf} to build a distribution tarball (in 
@file{.tar.lz} format) and a PDF manual.
+This can be useful for archiving, or sending to colleagues who don't use Git 
for an easy build and manual.
 
 @item -u STR
 @itemx --upload STR
-Activate the @option{--dist} (@option{-D}) option, then use secure copy
-(@command{scp}, part of the SSH tools) to copy the tarball and PDF to the
-@file{src} and @file{pdf} sub-directories of the specified server and its
-directory (value to this option). For example @command{--upload
-my-server:dir}, will copy the tarball in the @file{dir/src}, and the PDF
-manual in @file{dir/pdf} of @code{my-server} server. It will then make a
-symbolic link in the top server directory to the tarball that is called
-@file{gnuastro-latest.tar.lz}.
+Activate the @option{--dist} (@option{-D}) option, then use secure copy 
(@command{scp}, part of the SSH tools) to copy the tarball and PDF to the 
@file{src} and @file{pdf} sub-directories of the specified server and its 
directory (value to this option).
+For example, @command{--upload my-server:dir} will copy the tarball into @file{dir/src}, and the PDF manual into @file{dir/pdf}, on the @code{my-server} server.
+It will then make a symbolic link in the top server directory to the tarball 
that is called @file{gnuastro-latest.tar.lz}.
 
 @item -p
 @itemx --publish
-Short for @option{--autoreconf --clean --debug --check --upload
-STR}. @option{--debug} is added because it will greatly speed up the
-build. It will have no effect on the produced tarball. This is good when
-you have made a commit and are ready to publish it on your server (if
-nothing crashes). Recall that if any of the previous steps fail the script
-aborts.
+Short for @option{--autoreconf --clean --debug --check --upload STR}.
+@option{--debug} is added because it will greatly speed up the build.
+It will have no effect on the produced tarball.
+This is good when you have made a commit and are ready to publish it on your 
server (if nothing crashes).
+Recall that if any of the previous steps fail the script aborts.
 
 @item -I
 @itemx --install-archive
-Short for @option{--autoreconf --clean --check --install --dist}. This is
-useful when you actually want to install the commit you just made (if the
-build and checks succeed). It will also produce a distribution tarball and
-PDF manual for easy access to the installed tarball on your system at a
-later time.
-
-Ideally, Gnuastro's Git version history makes it easy for a prepared system
-to revert back to a different point in history. But Gnuastro also needs to
-bootstrap files and also your collaborators might (usually do!) find it too
-much of a burden to do the bootstrapping themselves. So it is convenient to
-have a tarball and PDF manual of the version you have installed (and are
-using in your research) handily available.
+Short for @option{--autoreconf --clean --check --install --dist}.
+This is useful when you actually want to install the commit you just made (if 
the build and checks succeed).
+It will also produce a distribution tarball and PDF manual for easy access to 
the installed tarball on your system at a later time.
+
+Ideally, Gnuastro's Git version history makes it easy for a prepared system to revert to a different point in history.
+But Gnuastro also needs to bootstrap files, and your collaborators might (and usually do!) find it too much of a burden to do the bootstrapping themselves.
+So it is convenient to have a tarball and PDF manual of the version you have 
installed (and are using in your research) handily available.
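+
+For example, from Gnuastro's top source directory:
+
+@example
+$ ./developer-build -I
+@end example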
 
 @item -h
 @itemx --help
@@ -7066,35 +5316,26 @@ current values.
 @cindex @file{mock.fits}
 @cindex Tests, running
 @cindex Checking tests
-After successfully building (compiling) the programs with the @command{$
-make} command you can check the installation before installing. To run the
-tests, run
+After successfully building (compiling) the programs with the @command{$ make} command, you can check the installation before installing.
+To run the tests, run
 
 @example
 $ make check
 @end example
 
-For every program some tests are designed to check some possible
-operations. Running the command above will run those tests and give
-you a final report. If everything is OK and you have built all the
-programs, all the tests should pass. In case any of the tests fail,
-please have a look at @ref{Known issues} and if that still doesn't fix
-your problem, look that the @file{./tests/test-suite.log} file to see
-if the source of the error is something particular to your system or
-more general. If you feel it is general, please contact us because it
-might be a bug. Note that the tests of some programs depend on the
-outputs of other program's tests, so if you have not installed them
-they might be skipped or fail. Prior to releasing every distribution
-all these tests are checked. If you have a reasonably modern terminal,
-the outputs of the successful tests will be colored green and the
-failed ones will be colored red.
+For every program some tests are designed to check some possible operations.
+Running the command above will run those tests and give you a final report.
+If everything is OK and you have built all the programs, all the tests should 
pass.
+In case any of the tests fail, please have a look at @ref{Known issues} and if that still doesn't fix your problem, look at the @file{./tests/test-suite.log} file to see if the source of the error is something particular to your system or more general.
+If you feel it is general, please contact us because it might be a bug.
+Note that the tests of some programs depend on the outputs of other programs' tests, so if you have not installed them they might be skipped or fail.
+Prior to releasing every distribution all these tests are checked.
+If you have a reasonably modern terminal, the outputs of the successful tests 
will be colored green and the failed ones will be colored red.
 
-These scripts can also act as a good set of examples for you to see how the
-programs are run. All the tests are in the @file{tests/} directory. The
-tests for each program are shell scripts (ending with @file{.sh}) in a
-sub-directory of this directory with the same name as the program. See
-@ref{Test scripts} for more detailed information about these scripts in case
-you want to inspect them.
+These scripts can also act as a good set of examples for you to see how the 
programs are run.
+All the tests are in the @file{tests/} directory.
+The tests for each program are shell scripts (ending with @file{.sh}) in a 
sub-directory of this directory with the same name as the program.
+See @ref{Test scripts} for more detailed information about these scripts in 
case you want to inspect them.
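+
+For example, to list the test scripts of Crop from the top source directory (the sub-directory name follows the convention above):
+
+@example
+$ ls tests/crop/*.sh
+@end example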
 
 
 
@@ -7108,46 +5349,40 @@ you want to inspect them.
 @cindex US letter paper size
 @cindex Paper size, A4
 @cindex Paper size, US letter
-The default print version of this book is provided in the letter paper
-size. If you would like to have the print version of this book on
-paper and you are living in a country which uses A4, then you can
-rebuild the book. The great thing about the GNU build system is that
-the book source code which is in Texinfo is also distributed with
-the program source code, enabling you to do such customization
-(hacking).
+The default print version of this book is provided in the letter paper size.
+If you would like to have the print version of this book on paper and you are 
living in a country which uses A4, then you can rebuild the book.
+The great thing about the GNU build system is that the book source code, which is in Texinfo, is also distributed with the program source code, enabling you to do such customization (hacking).
 
 @cindex GNU Texinfo
-In order to change the paper size, you will need to have GNU Texinfo
-installed. Open @file{doc/gnuastro.texi} with any text editor. This is the
-source file that created this book. In the first few lines you will see
-this line:
+In order to change the paper size, you will need to have GNU Texinfo installed.
+Open @file{doc/gnuastro.texi} with any text editor.
+This is the source file that created this book.
+In the first few lines you will see this line:
 
 @example
 @@c@@afourpaper
 @end example
 
 @noindent
-In Texinfo, a line is commented with @code{@@c}. Therefore, un-comment this
-line by deleting the first two characters such that it changes to:
+In Texinfo, a line is commented with @code{@@c}.
+Therefore, un-comment this line by deleting the first two characters such that 
it changes to:
 
 @example
 @@afourpaper
 @end example
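+
+If you prefer the command-line to a text editor, a one-line command like the sketch below (using GNU @command{sed} from the top source directory) should make the same change:
+
+@example
+$ sed -i 's/^@@c@@afourpaper/@@afourpaper/' doc/gnuastro.texi
+@end example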
 
 @noindent
-Save the file and close it. You can now run
+Save the file and close it.
+You can now run the following command
 
 @example
 $ make pdf
 @end example
 
 @noindent
-and the new PDF book will be available in
-@file{SRCdir/doc/gnuastro.pdf}. By changing the @command{pdf} in
-@command{$ make pdf} to @command{ps} or @command{dvi} you can have the
-book in those formats. Note that you can do this for any book that
-is in Texinfo format, they might not have @code{@@afourpaper} line, so
-you can add it close to the top of the Texinfo source file.
+and the new PDF book will be available in @file{SRCdir/doc/gnuastro.pdf}.
+By changing the @command{pdf} in @command{$ make pdf} to @command{ps} or 
@command{dvi} you can have the book in those formats.
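+
+For example:
+
+@example
+$ make ps
+$ make dvi
+@end example
+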
+Note that you can do this for any book that is in Texinfo format; it might not have the @code{@@afourpaper} line, so you can add it close to the top of the Texinfo source file.
 
 
 
@@ -7155,37 +5390,32 @@ you can add it close to the top of the Texinfo source 
file.
 @node Known issues,  , A4 print book, Build and install
 @subsection Known issues
 
-Depending on your operating system and the version of the compiler you
-are using, you might confront some known problems during the
-configuration (@command{$ ./configure}), compilation (@command{$
-make}) and tests (@command{$ make check}). Here, their solutions are
-discussed.
+Depending on your operating system and the version of the compiler you are 
using, you might confront some known problems during the configuration 
(@command{$ ./configure}), compilation (@command{$ make}) and tests (@command{$ 
make check}).
+Here, their solutions are discussed.
 
 @itemize
 @cindex Configuration, not finding library
 @cindex Development packages
 @item
-@command{$ ./configure}: @emph{Configure complains about not finding a
-library even though you have installed it.} The possible solution is
-based on how you installed the package:
+@command{$ ./configure}: @emph{Configure complains about not finding a library 
even though you have installed it.}
+The possible solution is based on how you installed the package:
 
 @itemize
 @item
-From your distribution's package manager. Most probably this is
-because your distribution has separated the header files of a library
-from the library parts. Please also install the `development' packages
-for those libraries too. Just add a @file{-dev} or @file{-devel} to
-the end of the package name and re-run the package manager. This will
-not happen if you install the libraries from source. When installed
-from source, the headers are also installed.
+From your distribution's package manager.
+Most probably this is because your distribution has separated the header files 
of a library from the library parts.
+Please install the `development' packages for those libraries too.
+Just add a @file{-dev} or @file{-devel} to the end of the package name and 
re-run the package manager.
+This will not happen if you install the libraries from source.
+When installed from source, the headers are also installed.
 
 @item
 @cindex @command{LDFLAGS}
-From source. Then your linker is not looking where you installed the
-library. If you followed the instructions in this chapter, all the
-libraries will be installed in @file{/usr/local/lib}. So you have to tell
-your linker to look in this directory. To do so, configure Gnuastro like
-this:
+From source.
+Then your linker is not looking where you installed the library.
+If you followed the instructions in this chapter, all the libraries will be 
installed in @file{/usr/local/lib}.
+So you have to tell your linker to look in this directory.
+To do so, configure Gnuastro like this:
 
 @example
 $ ./configure LDFLAGS="-L/usr/local/lib"
@@ -7201,96 +5431,64 @@ directory}.
 @vindex --enable-gnulibcheck
 @cindex Gnulib: GNU Portability Library
 @cindex GNU Portability Library (Gnulib)
-@command{$ make}: @emph{Complains about an unknown function on a
-non-GNU based operating system.} In this case, please run @command{$
-./configure} with the @option{--enable-gnulibcheck} option to see if
-the problem is from the GNU Portability Library (Gnulib) not
-supporting your system or if there is a problem in Gnuastro, see
-@ref{Gnuastro configure options}. If the problem is not
-in Gnulib
-and after all its tests you get the same complaint from
-@command{make}, then please contact us at
-@file{bug-gnuastro@@gnu.org}. The cause is probably that a function
-that we have used is not supported by your operating system and we
-didn't included it along with the source tar ball. If the function is
-available in Gnulib, it can be fixed immediately.
+@command{$ make}: @emph{Complains about an unknown function on a non-GNU based 
operating system.}
+In this case, please run @command{$ ./configure} with the 
@option{--enable-gnulibcheck} option to see if the problem is from the GNU 
Portability Library (Gnulib) not supporting your system or if there is a 
problem in Gnuastro, see @ref{Gnuastro configure options}.
+If the problem is not in Gnulib and after all its tests you get the same 
complaint from @command{make}, then please contact us at 
@file{bug-gnuastro@@gnu.org}.
+The cause is probably that a function that we have used is not supported by your operating system and we didn't include it along with the source tarball.
+If the function is available in Gnulib, it can be fixed immediately.
 
 @item
 @cindex @command{CPPFLAGS}
-@command{$ make}: @emph{Can't find the headers (.h files) of installed
-libraries.} Your C pre-processor (CPP) isn't looking in the right place. To
-fix this, configure Gnuastro with an additional @code{CPPFLAGS} like below
-(assuming the library is installed in @file{/usr/local/include}:
+@command{$ make}: @emph{Can't find the headers (.h files) of installed 
libraries.}
+Your C pre-processor (CPP) isn't looking in the right place.
+To fix this, configure Gnuastro with an additional @code{CPPFLAGS} like below (assuming the library is installed in @file{/usr/local/include}):
 
 @example
 $ ./configure CPPFLAGS="-I/usr/local/include"
 @end example
 
-If you want to use the libraries for your other programming projects, then
-export this environment variable in a start-up script similar to the case
-for @file{LD_LIBRARY_PATH} explained below, also see @ref{Installation
-directory}.
+If you want to use the libraries for your other programming projects, then 
export this environment variable in a start-up script similar to the case for 
@file{LD_LIBRARY_PATH} explained below, also see @ref{Installation directory}.
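+
+For example, a line like the following in your @file{~/.bashrc} (a sketch, assuming the headers are under @file{/usr/local/include}) will keep this setting permanent:
+
+@example
+export CPPFLAGS="-I/usr/local/include"
+@end example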
 
 @cindex Tests, only one passes
 @cindex @file{LD_LIBRARY_PATH}
 @item
-@command{$ make check}: @emph{Only the first couple of tests pass, all the
-rest fail or get skipped.}  It is highly likely that when searching for
-shared libraries, your system doesn't look into the @file{/usr/local/lib}
-directory (or wherever you installed Gnuastro or its dependencies). To make
-sure it is added to the list of directories, add the following line to your
-@file{~/.bashrc} file and restart your terminal. Don't forget to change
-@file{/usr/local/lib} if the libraries are installed in other
-(non-standard) directories.
+@command{$ make check}: @emph{Only the first couple of tests pass, all the rest fail or get skipped.}
+It is highly likely that when searching for shared libraries, your system doesn't look into the @file{/usr/local/lib} directory (or wherever you installed Gnuastro or its dependencies).
+To make sure it is added to the list of directories, add the following line to 
your @file{~/.bashrc} file and restart your terminal.
+Don't forget to change @file{/usr/local/lib} if the libraries are installed in 
other (non-standard) directories.
 
 @example
 export LD_LIBRARY_PATH="$LD_LIBRARY_PATH:/usr/local/lib"
 @end example
 
-You can also add more directories by using a colon `@code{:}' to separate
-them. See @ref{Installation directory} and @ref{Linking} to learn more on
-the @code{PATH} variables and dynamic linking respectively.
+You can also add more directories by using a colon `@code{:}' to separate them.
+See @ref{Installation directory} and @ref{Linking} to learn more on the 
@code{PATH} variables and dynamic linking respectively.
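+
+For example, with a hypothetical second location @file{/opt/mylibs/lib} besides @file{/usr/local/lib}:
+
+@example
+export LD_LIBRARY_PATH="$LD_LIBRARY_PATH:/usr/local/lib:/opt/mylibs/lib"
+@end example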
 
 @cindex GPL Ghostscript
 @item
-@command{$ make check}: @emph{The tests relying on external programs
-(for example @file{fitstopdf.sh} fail.}) This is probably due to the
-fact that the version number of the external programs is too old for
-the tests we have preformed. Please update the program to a more
-recent version. For example to create a PDF image, you will need GPL
-Ghostscript, but older versions do not work, we have successfully
-tested it on version 9.15. Older versions might cause a failure in the
-test result.
+@command{$ make check}: @emph{The tests relying on external programs (for example @file{fitstopdf.sh}) fail.}
+This is probably because the version number of the external programs is too old for the tests we have performed.
+Please update the program to a more recent version.
+For example to create a PDF image, you will need GPL Ghostscript, but older versions do not work; we have successfully tested it on version 9.15.
+Older versions might cause a failure in the test result.
 
 @item
 @cindex @TeX{}
 @cindex GNU Texinfo
-@command{$ make pdf}: @emph{The PDF book cannot be made.} To make a
-PDF book, you need to have the GNU Texinfo program (like any
-program, the more recent the better). A working @TeX{} program is also
-necessary, which you can get from Tex
-Live@footnote{@url{https://www.tug.org/texlive/}}.
+@command{$ make pdf}: @emph{The PDF book cannot be made.}
+To make a PDF book, you need to have the GNU Texinfo program (like any 
program, the more recent the better).
+A working @TeX{} program is also necessary, which you can get from TeX Live@footnote{@url{https://www.tug.org/texlive/}}.
 
 @item
 @cindex GNU Libtool
-After @code{make check}: do not copy the programs' executables to another
-(for example, the installation) directory manually (using @command{cp}, or
-@command{mv} for example). In the default configuration@footnote{If you
-configure Gnuastro with the @option{--disable-shared} option, then the
-libraries will be statically linked to the programs and this problem won't
-exist, see @ref{Linking}.}, the program binaries need to link with
-Gnuastro's shared library which is also built and installed with the
-programs. Therefore, to run successfully before and after installation,
-linking modifications need to be made by GNU Libtool at installation
-time. @command{make install} does this internally, but a simple copy might
-give linking errors when you run it. If you need to copy the executables,
-you can do so after installation.
+After @code{make check}: do not copy the programs' executables to another (for example, the installation) directory manually (using @command{cp} or @command{mv}).
+In the default configuration@footnote{If you configure Gnuastro with the 
@option{--disable-shared} option, then the libraries will be statically linked 
to the programs and this problem won't exist, see @ref{Linking}.}, the program 
binaries need to link with Gnuastro's shared library which is also built and 
installed with the programs.
+Therefore, to run successfully before and after installation, linking 
modifications need to be made by GNU Libtool at installation time.
+@command{make install} does this internally, but a simple copy might give 
linking errors when you run it.
+If you need to copy the executables, you can do so after installation.
 
 @end itemize
 
 @noindent
-If your problem was not listed above, please file a bug report
-(@ref{Report a bug}).
+If your problem was not listed above, please file a bug report (@ref{Report a 
bug}).
 
 
 
@@ -7313,52 +5511,27 @@ If your problem was not listed above, please file a bug 
report
 @node Common program behavior, Data containers, Installation, Top
 @chapter Common program behavior
 
-All the programs in Gnuastro share a set of common behavior mainly to do
-with user interaction to facilitate their usage and development. This
-includes how to feed input datasets into the programs, how to configure
-them, specifying the outputs, numerical data types, treating columns of
-information in tables and etc. This chapter is devoted to describing this
-common behavior in all programs. Because the behaviors discussed here are
-common to several programs, they are not repeated in each program's
-description.
-
-In @ref{Command-line}, a very general description of running the programs
-on the command-line is discussed, like difference between arguments and
-options, as well as options that are common/shared between all
-programs. None of Gnuastro's programs keep any internal configuration value
-(values for their different operational steps), they read their
-configuration primarily from the command-line, then from specific files in
-directory, user, or system-wide settings. Using these configuration files
-can greatly help reproducible and robust usage of Gnuastro, see
-@ref{Configuration files} for more.
-
-It is not possible to always have the different options and configurations
-of each program on the top of your head. It is very natural to forget the
-options of a program, their current default values, or how it should be run
-and what it did. Gnuastro's programs have multiple ways to help you refresh
-your memory in multiple levels (just an option name, a short description,
-or fast access to the relevant section of the manual. See @ref{Getting
-help} for more for more on benefiting from this very convenient feature.
-
-Many of the programs use the multi-threaded character of modern CPUs, in
-@ref{Multi-threaded operations} we'll discuss how you can configure this
-behavior, along with some tips on making best use of them. In @ref{Numeric
-data types}, we'll review the various types to store numbers in your
-datasets: setting the proper type for the usage context@footnote{For
-example if the values in your dataset can only be integers between 0 or
-65000, store them in a unsigned 16-bit type, not 64-bit floating point type
-(which is the default in most systems). It takes four times less space and
-is much faster to process.} can greatly improve the file size and also speed
-of reading, writing or processing them.
-
-We'll then look into the recognized table formats in @ref{Tables} and how
-large datasets are broken into tiles, or mesh grid in
-@ref{Tessellation}. Finally, we'll take a look at the behavior regarding
-output files: @ref{Automatic output} describes how the programs set a
-default name for their output when you don't give one explicitly (using
-@option{--output}). When the output is a FITS file, all the programs also
-store some very useful information in the header that is discussed in
-@ref{Output FITS files}.
+All the programs in Gnuastro share a set of common behaviors, mainly to do with user interaction, to facilitate their usage and development.
+This includes how to feed input datasets into the programs, how to configure 
them, specifying the outputs, numerical data types, treating columns of 
information in tables, etc.
+This chapter is devoted to describing this common behavior in all programs.
+Because the behaviors discussed here are common to several programs, they are 
not repeated in each program's description.
+
+In @ref{Command-line}, a very general description of running the programs on the command-line is discussed, like the difference between arguments and options, as well as options that are common/shared between all programs.
+None of Gnuastro's programs keep any internal configuration value (values for their different operational steps); they read their configuration primarily from the command-line, then from specific files holding directory, user, or system-wide settings.
+Using these configuration files can greatly help reproducible and robust usage of Gnuastro, see @ref{Configuration files} for more.
+
+It is not possible to always have the different options and configurations of 
each program on the top of your head.
+It is very natural to forget the options of a program, their current default 
values, or how it should be run and what it did.
+Gnuastro's programs have multiple ways to help you refresh your memory at multiple levels (just an option name, a short description, or fast access to the relevant section of the manual).
+See @ref{Getting help} for more on benefiting from this very convenient feature.
+
+Many of the programs use the multi-threaded character of modern CPUs; in @ref{Multi-threaded operations} we'll discuss how you can configure this behavior, along with some tips on making the best use of them.
+In @ref{Numeric data types}, we'll review the various types to store numbers in your datasets: setting the proper type for the usage context@footnote{For example if the values in your dataset can only be integers between 0 and 65000, store them in an unsigned 16-bit type, not a 64-bit floating point type (which is the default in most systems).
+It takes four times less space and is much faster to process.} can greatly improve the file size and also the speed of reading, writing or processing them.
+
+We'll then look into the recognized table formats in @ref{Tables} and how 
large datasets are broken into tiles, or mesh grid in @ref{Tessellation}.
+Finally, we'll take a look at the behavior regarding output files: 
@ref{Automatic output} describes how the programs set a default name for their 
output when you don't give one explicitly (using @option{--output}).
+When the output is a FITS file, all the programs also store some very useful 
information in the header that is discussed in @ref{Output FITS files}.
 
 @menu
 * Command-line::                How to use the command-line.
@@ -7376,13 +5549,10 @@ store some very useful information in the header that 
is discussed in
 @node Command-line, Configuration files, Common program behavior, Common 
program behavior
 @section Command-line
 
-Gnuastro's programs are customized through the standard Unix-like
-command-line environment and GNU style command-line options. Both are very
-common in many Unix-like operating system programs. In @ref{Arguments and
-options} we'll start with the difference between arguments and options and
-elaborate on the GNU style of options. Afterwards, in @ref{Common options},
-we'll go into the detailed list of all the options that are common to all
-the programs in Gnuastro.
+Gnuastro's programs are customized through the standard Unix-like command-line 
environment and GNU style command-line options.
+Both are very common in many Unix-like operating system programs.
+In @ref{Arguments and options} we'll start with the difference between 
arguments and options and elaborate on the GNU style of options.
+Afterwards, in @ref{Common options}, we'll go into the detailed list of all 
the options that are common to all the programs in Gnuastro.
 
 @menu
 * Arguments and options::       Different ways to specify inputs and 
configuration.
@@ -7398,73 +5568,47 @@ the programs in Gnuastro.
 @cindex Command-line options
 @cindex Arguments to programs
 @cindex Command-line arguments
-When you type a command on the command-line, it is passed onto the shell (a
-generic name for the program that manages the command-line) as a string of
-characters. As an example, see the ``Invoking ProgramName'' sections in
-this manual for some examples of commands with each program, like
-@ref{Invoking asttable}, @ref{Invoking astfits}, or @ref{Invoking
-aststatistics}.
-
-The shell then brakes up your string into separate @emph{tokens} or
-@emph{words} using any @emph{metacharacters} (like white-space, tab,
-@command{|}, @command{>} or @command{;}) that are in the string. On the
-command-line, the first thing you usually enter is the name of the program
-you want to run. After that, you can specify two types of tokens:
-@emph{arguments} and @emph{options}. In the GNU-style, arguments are those
-tokens that are not preceded by any hyphens (@command{-}, see
-@ref{Arguments}). Here is one example:
+When you type a command on the command-line, it is passed onto the shell (a 
generic name for the program that manages the command-line) as a string of 
characters.
+As an example, see the ``Invoking ProgramName'' sections in this manual for 
some examples of commands with each program, like @ref{Invoking asttable}, 
@ref{Invoking astfits}, or @ref{Invoking aststatistics}.
+
+The shell then breaks up your string into separate @emph{tokens} or @emph{words} using any @emph{metacharacters} (like white-space, tab, @command{|}, @command{>} or @command{;}) that are in the string.
+On the command-line, the first thing you usually enter is the name of the 
program you want to run.
+After that, you can specify two types of tokens: @emph{arguments} and 
@emph{options}.
+In the GNU-style, arguments are those tokens that are not preceded by any 
hyphens (@command{-}, see @ref{Arguments}).
+Here is one example:
 
 @example
 $ astcrop --center=53.162551,-27.789676 -w10/3600 --mode=wcs udf.fits
 @end example
 
-In the example above, we are running @ref{Crop} to crop a region of width
-10 arc-seconds centered at the given RA and Dec from the input Hubble
-Ultra-Deep Field (UDF) FITS image. Here, the argument is
-@file{udf.fits}. Arguments are most commonly the input file names
-containing your data. Options start with one or two hyphens, followed by an
-identifier for the option (the option's name, for example,
-@option{--center}, @option{-w}, @option{--mode} in the example above) and
-its value (anything after the option name, or the optional @key{=}
-character). Through options you can configure how the program runs
-(interprets the data you provided).
+In the example above, we are running @ref{Crop} to crop a region of width 10 
arc-seconds centered at the given RA and Dec from the input Hubble Ultra-Deep 
Field (UDF) FITS image.
+Here, the argument is @file{udf.fits}.
+Arguments are most commonly the input file names containing your data.
+Options start with one or two hyphens, followed by an identifier for the 
option (the option's name, for example, @option{--center}, @option{-w}, 
@option{--mode} in the example above) and its value (anything after the option 
name, or the optional @key{=} character).
+Through options you can configure how the program runs (interprets the data 
you provided).
 
 @vindex --help
 @vindex --usage
 @cindex Mandatory arguments
-Arguments can be mandatory and optional and unlike options, they don't have
-any identifiers. Hence, when there multiple arguments, their order might
-also matter (for example in @command{cp} which is used for copying one file
-to another location). The outputs of @option{--usage} and @option{--help}
-shows which arguments are optional and which are mandatory, see
-@ref{--usage}.
-
-As their name suggests, @emph{options} can be considered to be optional and
-most of the time, you don't have to worry about what order you specify them
-in. When the order does matter, or the option can be invoked multiple
-times, it is explicitly mentioned in the ``Invoking ProgramName'' section
-of each program (this is a very important aspect of an option).
-
-@cindex Metacharacters on the command-line
-In case your arguments or option values contain any of the shell's
-meta-characters, you have to quote them. If there is only one such
-character, you can use a backslash (@command{\}) before it. If there are
-multiple, it might be easier to simply put your whole argument or option
-value inside of double quotes (@command{"}). In such cases, everything
-inside the double quotes will be seen as one token or word.
+Arguments can be mandatory or optional and, unlike options, they don't have any identifiers.
+Hence, when there are multiple arguments, their order might also matter (for example in @command{cp}, which is used for copying one file to another location).
+The outputs of @option{--usage} and @option{--help} show which arguments are optional and which are mandatory, see @ref{--usage}.
+
+As their name suggests, @emph{options} can be considered to be optional and 
most of the time, you don't have to worry about what order you specify them in.
+When the order does matter, or the option can be invoked multiple times, it is 
explicitly mentioned in the ``Invoking ProgramName'' section of each program 
(this is a very important aspect of an option).
+
+@cindex Metacharacters on the command-line
+In case your arguments or option values contain any of the shell's meta-characters, you have to quote them.
+If there is only one such character, you can use a backslash (@command{\}) 
before it.
+If there are multiple, it might be easier to simply put your whole argument or 
option value inside of double quotes (@command{"}).
+In such cases, everything inside the double quotes will be seen as one token 
or word.
 
 @cindex HDU
 @cindex Header data unit
-For example, let's say you want to specify the header data unit (HDU) of
-your FITS file using a complex expression like `@command{3; images(exposure
-> 100)}'. If you simply add these after the @option{--hdu} (@option{-h})
-option, the programs in Gnuastro will read the value to the HDU option as
-`@command{3}' and run. Then, the shell will attempt to run a separate
-command `@command{images(exposure > 100)}' and complain about a syntax
-error. This is because the semicolon (@command{;}) is an `end of command'
-character in the shell. To solve this problem you can simply put double
-quotes around the whole string you want to pass to @option{--hdu} as seen
-below:
+For example, let's say you want to specify the header data unit (HDU) of your 
FITS file using a complex expression like `@command{3; images(exposure > 100)}'.
+If you simply add these after the @option{--hdu} (@option{-h}) option, the 
programs in Gnuastro will read the value to the HDU option as `@command{3}' and 
run.
+Then, the shell will attempt to run a separate command 
`@command{images(exposure > 100)}' and complain about a syntax error.
+This is because the semicolon (@command{;}) is an `end of command' character 
in the shell.
+To solve this problem you can simply put double quotes around the whole string 
you want to pass to @option{--hdu} as seen below:
 @example
 $ astcrop --hdu="3; images(exposure > 100)" image.fits
 @end example
@@ -7480,21 +5624,14 @@ $ astcrop --hdu="3; images(exposure > 100)" image.fits
 
 @node Arguments, Options, Arguments and options, Arguments and options
 @subsubsection Arguments
-In Gnuastro, arguments are almost exclusively used as the input data file
-names. Please consult the first few paragraph of the ``Invoking
-ProgramName'' section for each program for a description of what it expects
-as input, how many arguments, or input data, it accepts, or in what
-order. Everything particular about how a program treats arguments, is
-explained under the ``Invoking ProgramName'' section for that program.
-
-Generally, if there is a standard file name extension for a particular
-format, that filename extension is used to separate the kinds of
-arguments. The list below shows the data formats that are recognized in
-Gnuastro's programs based on their file name endings. Any argument that
-doesn't end with the specified extensions below is considered to be a text
-file (usually catalogs, see @ref{Tables}). In some cases, a program
-can accept specific formats, for example @ref{ConvertType} also accepts
-@file{.jpg} images.
+In Gnuastro, arguments are almost exclusively used as the input data file 
names.
+Please consult the first few paragraphs of the ``Invoking ProgramName'' section for each program for a description of what it expects as input, how many arguments, or input data, it accepts, or in what order.
+Everything particular about how a program treats arguments is explained under the ``Invoking ProgramName'' section for that program.
+
+Generally, if there is a standard file name extension for a particular format, 
that filename extension is used to separate the kinds of arguments.
+The list below shows the data formats that are recognized in Gnuastro's 
programs based on their file name endings.
+Any argument that doesn't end with the specified extensions below is 
considered to be a text file (usually catalogs, see @ref{Tables}).
+In some cases, a program can accept specific formats, for example 
@ref{ConvertType} also accepts @file{.jpg} images.
 
 @cindex Astronomical data suffixes
 @cindex Suffixes, astronomical data
@@ -7520,14 +5657,9 @@ can accept specific formats, for example 
@ref{ConvertType} also accepts
 
 @end itemize
 
-Through out this book and in the command-line outputs, whenever we
-want to generalize all such astronomical data formats in a text place
-holder, we will use @file{ASTRdata}, we will assume that the extension
-is also part of this name. Any file ending with these names is
-directly passed on to CFITSIO to read. Therefore you don't necessarily
-have to have these files on your computer, they can also be located on
-an FTP or HTTP server too, see the CFITSIO manual for more
-information.
+Throughout this book and in the command-line outputs, whenever we want to generalize all such astronomical data formats in a text placeholder, we will use @file{ASTRdata}; we will assume that the extension is also part of this name.
+Any file ending with these names is directly passed on to CFITSIO to read.
+Therefore you don't necessarily have to have these files on your computer; they can also be located on an FTP or HTTP server, see the CFITSIO manual for more information.
 
 CFITSIO has its own error reporting techniques; if your input file(s)
 cannot be opened or read, those errors will be printed prior to the
@@ -7541,35 +5673,26 @@ final error by Gnuastro.
 @cindex GNU style options
 @cindex Options, GNU style
 @cindex Options, short (@option{-}) and long (@option{--})
-Command-line options allow configuring the behavior of a program in all
-GNU/Linux applications for each particular execution on a particular input
-data. A single option can be called in two ways: @emph{long} or
-@emph{short}. All options in Gnuastro accept the long format which has two
-hyphens an can have many characters (for example @option{--hdu}). Short
-options only have one hyphen (@key{-}) followed by one character (for
-example @option{-h}). You can see some examples in the list of options in
-@ref{Common options} or those for each program's ``Invoking ProgramName''
-section. Both formats are shown for those which support both. First the
-short is shown then the long.
-
-Usually, the short options are for when you are writing on the command-line
-and want to save keystrokes and time. The long options are good for shell
-scripts, where you aren't usually rushing. Long options provide a level of
-documentation, since they are more descriptive and less cryptic. Usually
-after a few months of not running a program, the short options will be
-forgotten and reading your previously written script will not be easy.
+Command-line options allow configuring the behavior of a program in all 
GNU/Linux applications for each particular execution on a particular input data.
+A single option can be called in two ways: @emph{long} or @emph{short}.
+All options in Gnuastro accept the long format, which has two hyphens and can have many characters (for example @option{--hdu}).
+Short options only have one hyphen (@key{-}) followed by one character (for 
example @option{-h}).
+You can see some examples in the list of options in @ref{Common options} or 
those for each program's ``Invoking ProgramName'' section.
+Both formats are shown for those which support both.
+First the short is shown, then the long.
+
+Usually, the short options are for when you are writing on the command-line 
and want to save keystrokes and time.
+The long options are good for shell scripts, where you aren't usually rushing.
+Long options provide a level of documentation, since they are more descriptive 
and less cryptic.
+Usually after a few months of not running a program, the short options will be 
forgotten and reading your previously written script will not be easy.
 
 @cindex On/Off options
 @cindex Options, on/off
-Some options need to be given a value if they are called and some
-don't. You can think of the latter type of options as on/off options. These
-two types of options can be distinguished using the output of the
-@option{--help} and @option{--usage} options, which are common to all GNU
-software, see @ref{Getting help}. In Gnuastro we use the following strings
-to specify when the option needs a value and what format that value should
-be in. More specific tests will be done in the program and if the values
-are out of range (for example negative when the program only wants a
-positive value), an error will be reported.
+Some options need to be given a value if they are called and some don't.
+You can think of the latter type of options as on/off options.
+These two types of options can be distinguished using the output of the 
@option{--help} and @option{--usage} options, which are common to all GNU 
software, see @ref{Getting help}.
+In Gnuastro we use the following strings to specify when the option needs a 
value and what format that value should be in.
+More specific tests will be done in the program and if the values are out of 
range (for example negative when the program only wants a positive value), an 
error will be reported.
 
 @vtable @option
 
@@ -7577,9 +5700,9 @@ positive value), an error will be reported.
 The value is read as an integer.
 
 @item FLT
-The value is read as a float. There are generally two types, depending
-on the context. If they are for fractions, they will have to be less
-than or equal to unity.
+The value is read as a float.
+There are generally two types, depending on the context.
+If they are for fractions, they will have to be less than or equal to unity.
 
 @item STR
 The value is read as a string of characters (for example a file name)
@@ -7590,96 +5713,66 @@ or other particular settings like a HDU name, see below.
 @noindent
 @cindex Values to options
 @cindex Option values
-To specify a value in the short format, simply put the value after the
-option. Note that since the short options are only one character long,
-you don't have to type anything between the option and its value. For
-the long option you either need white space or an @option{=} sign, for
-example @option{-h2}, @option{-h 2}, @option{--hdu 2} or
-@option{--hdu=2} are all equivalent.
-
-The short format of on/off options (those that don't need values) can be
-concatenated for example these two hypothetical sequences of options are
-equivalent: @option{-a -b -c4} and @option{-abc4}.  As an example, consider
-the following command to run Crop:
+To specify a value in the short format, simply put the value after the option.
+Note that since the short options are only one character long, you don't have 
to type anything between the option and its value.
+For the long option you either need white space or an @option{=} sign, for 
example @option{-h2}, @option{-h 2}, @option{--hdu 2} or @option{--hdu=2} are 
all equivalent.
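+
+For example, the calls below (on a hypothetical @file{image.fits}) are all identical for Gnuastro's programs:
+
+@example
+$ astfits -h2 image.fits
+$ astfits --hdu 2 image.fits
+$ astfits --hdu=2 image.fits
+@end example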
+
+The short format of on/off options (those that don't need values) can be concatenated; for example these two hypothetical sequences of options are equivalent: @option{-a -b -c4} and @option{-abc4}.
+As an example, consider the following command to run Crop:
 @example
 $ astcrop -Dr3 --wwidth 3 catalog.txt --deccol=4 ASTRdata
 @end example
 @noindent
-The @command{$} is the shell prompt, @command{astcrop} is the
-program name. There are two arguments (@command{catalog.txt} and
-@command{ASTRdata}) and four options, two of them given in short
-format (@option{-D}, @option{-r}) and two in long format
-(@option{--width} and @option{--deccol}). Three of them require a
-value and one (@option{-D}) is an on/off option.
+The @command{$} is the shell prompt, @command{astcrop} is the program name.
+There are two arguments (@command{catalog.txt} and @command{ASTRdata}) and four options, two of them given in short format (@option{-D}, @option{-r}) and two in long format (@option{--wwidth} and @option{--deccol}).
+Three of them require a value and one (@option{-D}) is an on/off option.
 
 @vindex --printparams
 @cindex Options, abbreviation
 @cindex Long option abbreviation
-If an abbreviation is unique between all the options of a program, the
-long option names can be abbreviated. For example, instead of typing
-@option{--printparams}, typing @option{--print} or maybe even
-@option{--pri} will be enough, if there are conflicts, the program
-will warn you and show you the alternatives. Finally, if you want the
-argument parser to stop parsing arguments beyond a certain point, you
-can use two dashes: @option{--}. No text on the command-line beyond
-these two dashes will be parsed.
+If an abbreviation is unique between all the options of a program, the long 
option names can be abbreviated.
+For example, instead of typing @option{--printparams}, typing @option{--print} or maybe even @option{--pri} will be enough; if there are conflicts, the program will warn you and show you the alternatives.
+Finally, if you want the argument parser to stop parsing arguments beyond a 
certain point, you can use two dashes: @option{--}.
+No text on the command-line beyond these two dashes will be parsed.
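+
+For example, with a hypothetical file whose name starts with a dash, the two dashes below let it be read as an argument and not as an option:
+
+@example
+$ astfits -- -strange-name.fits
+@end example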
 
 @cindex Repeated options
 @cindex Options, repeated
-Gnuastro has two types of options with values, those that only take a
-single value are the most common type. If these options are repeated or
-called more than once on the command-line, the value of the last time it
-was called will be assigned to it. This is very useful when you are
-testing/experimenting. Let's say you want to make a small modification to
-one option value. You can simply type the option with a new value in the
-end of the command and see how the script works. If you are satisfied with
-the change, you can remove the original option for human readability. If
-the change wasn't satisfactory, you can remove the one you just added and
-not worry about forgetting the original value. Without this capability, you
-would have to memorize or save the original value somewhere else, run the
-command and then change the value again which is not at all convenient and
-is potentially cause lots of bugs.
-
-On the other hand, some options can be called multiple times in one run of
-a program and can thus take multiple values (for example see the
-@option{--column} option in @ref{Invoking asttable}. In these cases, the
-order of stored values is the same order that you specified on the
-command-line.
+Gnuastro has two types of options with values; those that only take a single value are the most common type.
+If these options are repeated or called more than once on the command-line, 
the value of the last time it was called will be assigned to it.
+This is very useful when you are testing/experimenting.
+Let's say you want to make a small modification to one option value.
+You can simply type the option with a new value in the end of the command and 
see how the script works.
+If you are satisfied with the change, you can remove the original option for 
human readability.
+If the change wasn't satisfactory, you can remove the one you just added and 
not worry about forgetting the original value.
+Without this capability, you would have to memorize or save the original value somewhere else, run the command and then change the value again, which is not at all convenient and can potentially cause lots of bugs.
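+
+For example, in the hypothetical command below (building on the Crop example of the previous sections), only the last @option{--mode} value (@code{img}) will be used:
+
+@example
+$ astcrop --mode=wcs catalog.txt ASTRdata --mode=img
+@end example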
+
+On the other hand, some options can be called multiple times in one run of a program and can thus take multiple values (for example see the @option{--column} option in @ref{Invoking asttable}).
+In these cases, the order of stored values is the same order that you 
specified on the command-line.
 
 @cindex Configuration files
 @cindex Default option values
-Gnuastro's programs don't keep any internal default values, so some options
-are mandatory and if they don't have a value, the program will complain and
-abort. Most programs have many such options and typing them by hand on
-every call is impractical. To facilitate the user experience, after parsing
-the command-line, Gnuastro's programs read special configuration files to
-get the necessary values for the options you haven't identified on the
-command-line. These configuration files are fully described in
-@ref{Configuration files}.
+Gnuastro's programs don't keep any internal default values, so some options 
are mandatory and if they don't have a value, the program will complain and 
abort.
+Most programs have many such options and typing them by hand on every call is 
impractical.
+To facilitate the user experience, after parsing the command-line, Gnuastro's 
programs read special configuration files to get the necessary values for the 
options you haven't identified on the command-line.
+These configuration files are fully described in @ref{Configuration files}.
 
 @cartouche
 @noindent
 @cindex Tilde expansion as option values
-@strong{CAUTION:} In specifying a file address, if you want to use the
-shell's tilde expansion (@command{~}) to specify your home directory,
-leave at least one space between the option name and your value. For
-example use @command{-o ~/test}, @command{--output ~/test} or
-@command{--output= ~/test}. Calling them with @command{-o~/test} or
-@command{--output=~/test} will disable shell expansion.
+@strong{CAUTION:} In specifying a file address, if you want to use the shell's 
tilde expansion (@command{~}) to specify your home directory, leave at least 
one space between the option name and your value.
+For example use @command{-o ~/test}, @command{--output ~/test} or 
@command{--output= ~/test}.
+Calling them with @command{-o~/test} or @command{--output=~/test} will disable 
shell expansion.
 @end cartouche
 @cartouche
 @noindent
-@strong{CAUTION:} If you forget to specify a value for an option which
-requires one, and that option is the last one, Gnuastro will warn you. But
-if it is in the middle of the command, it will take the text of the next
-option or argument as the value which can cause undefined behavior.
+@strong{CAUTION:} If you forget to specify a value for an option which 
requires one, and that option is the last one, Gnuastro will warn you.
+But if it is in the middle of the command, it will take the text of the next option or argument as the value, which can cause undefined behavior.
 @end cartouche
 @cartouche
 @noindent
 @cindex Counting from zero.
-@strong{NOTE:} In some contexts Gnuastro's counting starts from 0 and in
-others 1. You can assume by default that counting starts from 1, if it
-starts from 0 for a special option, it will be explicitly mentioned.
+@strong{NOTE:} In some contexts Gnuastro's counting starts from 0 and in 
others 1.
+You can assume by default that counting starts from 1; if it starts from 0 for a special option, it will be explicitly mentioned.
 @end cartouche
 
 @node Common options, Standard input, Arguments and options, Command-line
@@ -7687,14 +5780,10 @@ starts from 0 for a special option, it will be 
explicitly mentioned.
 
 @cindex Options common to all programs
 @cindex Gnuastro common options
-To facilitate the job of the users and developers, all the programs in
-Gnuastro share some basic command-line options for the options that are
-common to many of the programs. The full list is classified as @ref{Input
-output options}, @ref{Processing options}, and @ref{Operating mode
-options}. In some programs, some of the options are irrelevant, but still
-recognized (you won't get an unrecognized option error, but the value isn't
-used). Unless otherwise mentioned, these options are identical between all
-programs.
+To facilitate the job of users and developers, all the programs in Gnuastro share some basic command-line options that are common to many of the programs.
+The full list is classified as @ref{Input output options}, @ref{Processing 
options}, and @ref{Operating mode options}.
+In some programs, some of the options are irrelevant, but still recognized 
(you won't get an unrecognized option error, but the value isn't used).
+Unless otherwise mentioned, these options are identical between all programs.
 
 @menu
 * Input output options::        Common input/output options.
@@ -7713,110 +5802,78 @@ programs.
 @cindex Timeout
 @cindex Standard input
 @item --stdintimeout
-Number of micro-seconds to wait for writing/typing in the @emph{first line}
-of standard input from the command-line (see @ref{Standard input}). This is
-only relevant for programs that also accept input from the standard input,
-@emph{and} you want to manually write/type the contents on the
-terminal. When the standard input is already connected to a pipe (output of
-another program), there won't be any waiting (hence no timeout, thus making
-this option redundant).
-
-If the first line-break (for example with the @key{ENTER} key) is not
-provided before the timeout, the program will abort with an error that no
-input was given. Note that this time interval is @emph{only} for the first
-line that you type. Once the first line is given, the program will assume
-that more data will come and accept rest of your inputs without any time
-limit. You need to specify the ending of the standard input, for example by
-pressing @key{CTRL-D} after a new line.
-
-Note that any input you write/type into a program on the command-line with
-Standard input will be discarded (lost) once the program is finished. It is
-only recoverable manually from your command-line (where you actually typed)
-as long as the terminal is open. So only use this feature when you are sure
-that you don't need the dataset (or have a copy of it somewhere else).
+Number of micro-seconds to wait for writing/typing in the @emph{first line} of 
standard input from the command-line (see @ref{Standard input}).
+This is only relevant for programs that also accept input from the standard 
input, @emph{and} when you want to manually write/type the contents on the 
terminal.
+When the standard input is already connected to a pipe (output of another 
program), there won't be any waiting (hence no timeout, thus making this option 
redundant).
+
+If the first line-break (for example with the @key{ENTER} key) is not provided 
before the timeout, the program will abort with an error that no input was 
given.
+Note that this time interval is @emph{only} for the first line that you type.
+Once the first line is given, the program will assume that more data will 
come and accept the rest of your inputs without any time limit.
+You need to specify the ending of the standard input, for example by pressing 
@key{CTRL-D} after a new line.
+
+Note that any input you write/type into a program on the command-line with 
Standard input will be discarded (lost) once the program is finished.
+It is only recoverable manually from your command-line (where you actually 
typed) as long as the terminal is open.
+So only use this feature when you are sure that you don't need the dataset (or 
have a copy of it somewhere else).
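+
+For example, here is a minimal sketch of manually typing a (hypothetical) 
small dataset into Statistics; after setting a long-enough timeout, type the 
numbers and finish with @key{CTRL-D} on a new line:
+
+@example
+$ aststatistics --stdintimeout=10000000
+1
+2
+3
+@end example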
 
 
 @cindex HDU
 @cindex Header data unit
 @item -h STR/INT
 @itemx --hdu=STR/INT
-The name or number of the desired Header Data Unit, or HDU, in the FITS
-image. A FITS file can store multiple HDUs or extensions, each with either
-an image or a table or nothing at all (only a header). Note that counting
-of the extensions starts from 0(zero), not 1(one). Counting from 0 is
-forced on us by CFITSIO which directly reads the value you give with this
-option (see @ref{CFITSIO}). When specifying the name, case is not important
-so @command{IMAGE}, @command{image} or @command{ImAgE} are equivalent.
-
-CFITSIO has many capabilities to help you find the extension you want, far
-beyond the simple extension number and name. See CFITSIO manual's ``HDU
-Location Specification'' section for a very complete explanation with
-several examples. A @code{#} is appended to the string you specify for the
-HDU@footnote{With the @code{#} character, CFITSIO will only read the
-desired HDU into your memory, not all the existing HDUs in the fits file.}
-and the result is put in square brackets and appended to the FITS file name
-before calling CFITSIO to read the contents of the HDU for all the programs
-in Gnuastro.
+The name or number of the desired Header Data Unit, or HDU, in the FITS image.
+A FITS file can store multiple HDUs or extensions, each with either an image 
or a table or nothing at all (only a header).
+Note that counting of the extensions starts from 0 (zero), not 1 (one).
+Counting from 0 is forced on us by CFITSIO, which directly reads the value 
you give with this option (see @ref{CFITSIO}).
+When specifying the name, case is not important, so @command{IMAGE}, 
@command{image} or @command{ImAgE} are equivalent.
+
+CFITSIO has many capabilities to help you find the extension you want, far 
beyond the simple extension number and name.
+See CFITSIO manual's ``HDU Location Specification'' section for a very 
complete explanation with several examples.
+A @code{#} is appended to the string you specify for the HDU@footnote{With the 
@code{#} character, CFITSIO will only read the desired HDU into your memory, 
not all the existing HDUs in the FITS file.} and the result is put in square 
brackets and appended to the FITS file name before calling CFITSIO to read the 
contents of the HDU for all the programs in Gnuastro.
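+
+For example, assuming a hypothetical @file{image.fits} whose second extension 
(counting from zero) is named @code{SCI}, the two calls below would be 
equivalent (here using the Fits program to print that extension's keywords):
+
+@example
+$ astfits image.fits --hdu=1
+$ astfits image.fits --hdu=SCI
+@end example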
 
 @item -s STR
 @itemx --searchin=STR
-Where to match/search for columns when the column identifier wasn't a
-number, see @ref{Selecting table columns}. The acceptable values are
-@command{name}, @command{unit}, or @command{comment}. This option is only
-relevant for programs that take table columns as input.
+Where to match/search for columns when the column identifier wasn't a number, 
see @ref{Selecting table columns}.
+The acceptable values are @command{name}, @command{unit}, or @command{comment}.
+This option is only relevant for programs that take table columns as input.
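+
+For example, assuming a hypothetical catalog @file{cat.fits} where only the 
coordinate columns have a unit of @code{deg}, a sketch like this would select 
those columns by their units instead of their names:
+
+@example
+$ asttable cat.fits --searchin=unit --column=deg
+@end example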
 
 @item -I
 @itemx --ignorecase
-Ignore case while matching/searching column meta-data (in the field
-specified by the @option{--searchin}). The FITS standard suggests to treat
-the column names as case insensitive, which is strongly recommended here
-also but is not enforced. This option is only relevant for programs that
-take table columns as input.
+Ignore case while matching/searching column meta-data (in the field specified 
by the @option{--searchin}).
+The FITS standard suggests treating the column names as case-insensitive, 
which is also strongly recommended here, but it is not enforced.
+This option is only relevant for programs that take table columns as input.
 
-This option is not relevant to @ref{BuildProgram}, hence in that program the
-short option @option{-I} is used for include directories, not to ignore
-case.
+This option is not relevant to @ref{BuildProgram}; hence, in that program the 
short option @option{-I} is used for include directories, not to ignore case.
 
 @item -o STR
 @itemx --output=STR
-The name of the output file or directory. With this option the automatic
-output names explained in @ref{Automatic output} are ignored.
+The name of the output file or directory.
+With this option, the automatic output names explained in @ref{Automatic 
output} are ignored.
 
 @item -T STR
 @itemx --type=STR
-The data type of the output depending on the program context. This option
-isn't applicable to some programs like @ref{Fits} and will be ignored by
-them. The different acceptable values to this option are fully described in
-@ref{Numeric data types}.
+The data type of the output depending on the program context.
+This option isn't applicable to some programs like @ref{Fits} and will be 
ignored by them.
+The different acceptable values to this option are fully described in 
@ref{Numeric data types}.
 
 @item -D
 @itemx --dontdelete
-By default, if the output file already exists, Gnuastro's programs will
-silently delete it and put their own outputs in its place. When this option
-is activated, if the output file already exists, the programs will not
-delete it, will warn you, and will abort.
+By default, if the output file already exists, Gnuastro's programs will 
silently delete it and put their own outputs in its place.
+When this option is activated, if the output file already exists, the programs 
will not delete it, will warn you, and will abort.
 
 @item -K
 @itemx --keepinputdir
-In automatic output names, don't remove the directory information of the
-input file names. As explained in @ref{Automatic output}, if no output name
-is specified (with @option{--output}), then the output name will be made in
-the existing directory based on your input's file name (ignoring the
-directory of the input). If you call this option, the directory information
-of the input will be kept and the automatically generated output name will
-be in the same directory as the input (usually with a suffix added). Note
-that his is only relevant if you are running the program in a different
-directory than the input data.
+In automatic output names, don't remove the directory information of the input 
file names.
+As explained in @ref{Automatic output}, if no output name is specified (with 
@option{--output}), then the output name will be made in the existing directory 
based on your input's file name (ignoring the directory of the input).
+If you call this option, the directory information of the input will be kept 
and the automatically generated output name will be in the same directory as 
the input (usually with a suffix added).
+Note that this is only relevant if you are running the program in a different 
directory than the input data.
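+
+For example, a sketch with a (hypothetical) input in another directory; with 
this option, the automatic output will be written next to the input, not in 
the current directory:
+
+@example
+$ astcrop /path/to/data/image.fits --section=1:100,1:100 --keepinputdir
+@end example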
 
 @item -t STR
 @itemx --tableformat=STR
-The output table's type. This option is only relevant when the output is a
-table and its format cannot be deduced from its filename. For example, if a
-name ending in @file{.fits} was given to @option{--output}, then the
-program knows you want a FITS table. But there are two types of FITS
-tables: FITS ASCII, and FITS binary. Thus, with this option, the program is
-able to identify which type you want. The currently recognized values to
-this option are:
+The output table's type.
+This option is only relevant when the output is a table and its format cannot 
be deduced from its filename.
+For example, if a name ending in @file{.fits} was given to @option{--output}, 
then the program knows you want a FITS table.
+But there are two types of FITS tables: FITS ASCII, and FITS binary.
+Thus, with this option, the program is able to identify which type you want.
+The currently recognized values to this option are:
 
 @table @command
 @item txt
@@ -7834,66 +5891,44 @@ A FITS binary table (see @ref{Recognized table 
formats}).
 @node Processing options, Operating mode options, Input output options, Common 
options
 @subsubsection Processing options
 
-Some processing steps are common to several programs, so they are defined
-as common options to all programs. Note that this class of common options
-is thus necessarily less common between all the programs than those
-described in @ref{Input output options}, or @ref{Operating mode options}
-options. Also, if they are irrelevant for a program, these options will not
-display in the @option{--help} output of the program.
+Some processing steps are common to several programs, so they are defined as 
common options to all programs.
+Note that this class of common options is thus necessarily less common between 
all the programs than those described in @ref{Input output options} or 
@ref{Operating mode options}.
+Also, if they are irrelevant for a program, these options will not display in 
the @option{--help} output of the program.
 
 @table @option
 
 @item --minmapsize=INT
-The minimum size (in bytes) to store the contents of each main processing
-array of a program as a file (on the non-volatile HDD/SSD), not in
-RAM. This can be very useful when you have limited RAM, but need to process
-large datasets which can be very memory intensive. In such scenarios,
-without this option, the program will crash.
-
-A random filename is assigned to the array. This file will keep the
-contents of the array as long as it is necessary and the program will
-delete it as soon as its not necessary any more.
-
-If the @file{.gnuastro} directory exists and is writable, then the random
-file will be placed in there. Otherwise, the @file{.gnuastro_mmap}
-directory will be checked. If @file{.gnuastro_mmap} does not exist, or
-@file{.gnuastro} is not writable, the random file will be directly written
-in the current directory with the @file{.gnuastro_mmap_} prefix.
-
-By default, the name of the created file, and its size (in bytes) is
-printed by the program when it is created and later, when its
-deleted/freed. These messages are useful to the user who has enough RAM,
-but has forgot to increase the value to @code{--minmapsize} (this is often
-the case). To supress/disable such messages, use the @code{--quietmmap}
-option.
+The minimum size (in bytes) to store the contents of each main processing 
array of a program as a file (on the non-volatile HDD/SSD), not in RAM.
+This can be very useful when you have limited RAM, but need to process large 
datasets which can be very memory intensive.
+In such scenarios, without this option, the program will crash.
+
+A random filename is assigned to the array.
+This file will keep the contents of the array as long as it is necessary, and 
the program will delete it as soon as it is no longer necessary.
+
+If the @file{.gnuastro} directory exists and is writable, then the random file 
will be placed in there.
+Otherwise, the @file{.gnuastro_mmap} directory will be checked.
+If @file{.gnuastro_mmap} does not exist, or @file{.gnuastro} is not writable, 
the random file will be directly written in the current directory with the 
@file{.gnuastro_mmap_} prefix.
+
+By default, the name of the created file and its size (in bytes) are printed 
by the program when it is created and later when it is deleted/freed.
+These messages are useful to the user who has enough RAM, but has forgotten 
to increase the value of @code{--minmapsize} (this is often the case).
+To suppress/disable such messages, use the @code{--quietmmap} option.
+
+When this option has a value of @code{0} (zero, strongly discouraged, see box 
below), all arrays that use this feature in a program will actually be placed 
in a file (not in RAM).
+When this option is larger than the size of all the input datasets, all 
arrays will definitely be allocated in RAM and the program will run MUCH 
faster.
 
-When this option has a value of @code{0} (zero, strongly discouraged, see
-box below), all arrays that use this feature in a program will actually be
-placed in a file (not in RAM). When this option is larger than all the
-input datasets, all arrays will be definitely allocated in RAM and the
-program will run MUCH faster.
-
-Please note that using a non-volatile file (in the HDD/SDD) instead of RAM
-can significantly increase the program's running time, especially on HDDs
-(where read/write is slower). So it is best to give this option large
-values by default. You can then decrease it for a specific program's
-invocation on a large input after you see memory issues arise (for example
-an error, or the program not aborting and fully consuming your memory).
-
-The random file will be deleted once it is no longer needed by the
-program. The @file{.gnuastro} directory will also be deleted if it has no
-other contents (you may also have configuration files in this directory,
-see @ref{Configuration files}). If you see randomly named files remaining
-in this directory when the program finishes normally, please send us a bug
-report so we address the problem, see @ref{Report a bug}.
+Please note that using a non-volatile file (on the HDD/SSD) instead of RAM 
can significantly increase the program's running time, especially on HDDs 
(where read/write is slower).
+So it is best to give this option large values by default.
+You can then decrease it for a specific program's invocation on a large input 
after you see memory issues arise (for example an error, or the program not 
aborting and fully consuming your memory).
+
+The random file will be deleted once it is no longer needed by the program.
+The @file{.gnuastro} directory will also be deleted if it has no other 
contents (you may also have configuration files in this directory, see 
@ref{Configuration files}).
+If you see randomly named files remaining in this directory when the program 
finishes normally, please send us a bug report so we address the problem, see 
@ref{Report a bug}.
 
 @cartouche
 @noindent
-@strong{Limited number of memory-mapped files:} The operating system
-kernels usually support a limited number of memory-mapped files. Therefore
-never set @code{--minmapsize} to zero or a small number of bytes (so too
-many files are created). If the kernel capacity is exceeded, the program
-will crash.
+@strong{Limited number of memory-mapped files:} The operating system kernels 
usually support a limited number of memory-mapped files.
+Therefore, never set @code{--minmapsize} to zero or a small number of bytes 
(which would create too many files).
+If the kernel capacity is exceeded, the program will crash.
 @end cartouche
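+
+For example, a minimal sketch (with a hypothetical @file{image.fits}) that 
gives this option a value larger than the input, so all of NoiseChisel's 
arrays will definitely be allocated in RAM:
+
+@example
+$ astnoisechisel image.fits --minmapsize=10000000000
+@end example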
 
 @item --quietmmap
@@ -7903,70 +5938,47 @@ for more.
 
 @item -Z INT[,INT[,...]]
 @itemx --tilesize=INT[,INT[,...]]
-The size of regular tiles for tessellation, see @ref{Tessellation}. For
-each dimension an integer length (in units of data-elements or pixels) is
-necessary. If the number of input dimensions is different from the number
-of values given to this option, the program will stop with an error. Values
-must be separated by commas (@key{,}) and can also be fractions (for
-example @code{4/2}). If they are fractions, the result must be an integer,
-otherwise an error will be printed.
+The size of regular tiles for tessellation, see @ref{Tessellation}.
+For each dimension an integer length (in units of data-elements or pixels) is 
necessary.
+If the number of input dimensions is different from the number of values given 
to this option, the program will stop with an error.
+Values must be separated by commas (@key{,}) and can also be fractions (for 
example @code{4/2}).
+If they are fractions, the result must be an integer, otherwise an error will 
be printed.
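+
+For example, a sketch of running NoiseChisel on a hypothetical 2D image with 
tiles of 50x50 pixels (see @option{--checktiles} below to inspect the result):
+
+@example
+$ astnoisechisel image.fits --tilesize=50,50
+@end example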
 
 @item -M INT[,INT[,...]]
 @itemx --numchannels=INT[,INT[,...]]
-The number of channels for larger input tessellation, see
-@ref{Tessellation}. The number and types of acceptable values are similar
-to @option{--tilesize}. The only difference is that instead of length, the
-integers values given to this option represent the @emph{number} of
-channels, not their size.
+The number of channels for larger input tessellation, see @ref{Tessellation}.
+The number and types of acceptable values are similar to @option{--tilesize}.
+The only difference is that instead of length, the integer values given to 
this option represent the @emph{number} of channels, not their size.
 
 @item -F FLT
 @itemx --remainderfrac=FLT
-The fraction of remainder size along all dimensions to add to the first
-tile. See @ref{Tessellation} for a complete description. This option is
-only relevant if @option{--tilesize} is not exactly divisible by the input
-dataset's size in a dimension. If the remainder size is larger than this
-fraction (compared to @option{--tilesize}), then the remainder size will be
-added with one regular tile size and divided between two tiles at the start
-and end of the given dimension.
+The fraction of remainder size along all dimensions to add to the first tile.
+See @ref{Tessellation} for a complete description.
+This option is only relevant if @option{--tilesize} is not exactly divisible 
by the input dataset's size in a dimension.
+If the remainder size is larger than this fraction (compared to 
@option{--tilesize}), then the remainder size will be added with one regular 
tile size and divided between two tiles at the start and end of the given 
dimension.
 
 @item --workoverch
-Ignore the channel borders for the high-level job of the given
-application. As a result, while the channel borders are respected in
-defining the small tiles (such that no tile will cross a channel border),
-the higher-level program operation will ignore them, see
-@ref{Tessellation}.
+Ignore the channel borders for the high-level job of the given application.
+As a result, while the channel borders are respected in defining the small 
tiles (such that no tile will cross a channel border), the higher-level program 
operation will ignore them, see @ref{Tessellation}.
 
 @item --checktiles
-Make a FITS file with the same dimensions as the input but each pixel is
-replaced with the ID of the tile that it is associated with. Note that the
-tile IDs start from 0. See @ref{Tessellation} for more on Tiling an image
-in Gnuastro.
+Make a FITS file with the same dimensions as the input but each pixel is 
replaced with the ID of the tile that it is associated with.
+Note that the tile IDs start from 0.
+See @ref{Tessellation} for more on tiling an image in Gnuastro.
 
 @item --oneelempertile
-When showing the tile values (for example with @option{--checktiles}, or
-when the program's output is tessellated) only use one element for each
-tile. This can be useful when only the relative values given to each tile
-compared to the rest are important or need to be checked. Since the tiles
-usually have a large number of pixels within them the output will be much
-smaller, and so easier to read, write, store, or send.
-
-Note that when the full input size in any dimension is not exactly
-divisible by the given @option{--tilesize} in that dimension, the edge
-tile(s) will have different sizes (in units of the input's size), see
-@option{--remainderfrac}. But with this option, all displayed values are
-going to have the (same) size of one data-element. Hence, in such cases,
-the image proportions are going to be slightly different with this
-option.
+When showing the tile values (for example with @option{--checktiles}, or when 
the program's output is tessellated) only use one element for each tile.
+This can be useful when only the relative values given to each tile compared 
to the rest are important or need to be checked.
+Since the tiles usually have a large number of pixels within them, the output 
will be much smaller, and so easier to read, write, store, or send.
 
-If your input image is not exactly divisible by the tile size and you want
-one value per tile for some higher-level processing, all is not lost
-though. You can see how many pixels were within each tile (for example to
-weight the values or discard some for later processing) with Gnuastro's
-Statistics (see @ref{Statistics}) as shown below. The output FITS file is
-going to have two extensions, one with the median calculated on each tile
-and one with the number of elements that each tile covers. You can then use
-the @code{where} operator in @ref{Arithmetic} to set the values of all
-tiles that don't have the regular area to a blank value.
+Note that when the full input size in any dimension is not exactly divisible 
by the given @option{--tilesize} in that dimension, the edge tile(s) will have 
different sizes (in units of the input's size), see @option{--remainderfrac}.
+But with this option, all displayed values are going to have the (same) size 
of one data-element.
+Hence, in such cases, the image proportions are going to be slightly different 
with this option.
+
+If your input image is not exactly divisible by the tile size and you want one 
value per tile for some higher-level processing, all is not lost though.
+You can see how many pixels were within each tile (for example to weight the 
values or discard some for later processing) with Gnuastro's Statistics (see 
@ref{Statistics}) as shown below.
+The output FITS file is going to have two extensions, one with the median 
calculated on each tile and one with the number of elements that each tile 
covers.
+You can then use the @code{where} operator in @ref{Arithmetic} to set the 
values of all tiles that don't have the regular area to a blank value.
 
 @example
 $ aststatistics --median --number --ontile input.fits    \
@@ -7990,16 +6002,13 @@ elements, keep the non-blank elements untouched.
 @cindex Taxicab metric
 @cindex Manhattan metric
 @cindex Metric: Manhattan, Taxicab, Radial
-The metric to use for finding nearest neighbors. Currently it only accepts
-the Manhattan (or taxicab) metric with @code{manhattan}, or the radial
-metric with @code{radial}.
+The metric to use for finding nearest neighbors.
+Currently it only accepts the Manhattan (or taxicab) metric with 
@code{manhattan}, or the radial metric with @code{radial}.
 
-The Manhattan distance between two points is defined with
-@mymath{|\Delta{x}|+|\Delta{y}|}. Thus the Manhattan metric has the
-advantage of being fast, but at the expense of being less accurate. The
-radial distance is the standard definition of distance in a Euclidean
-space: @mymath{\sqrt{\Delta{x}^2+\Delta{y}^2}}. It is accurate, but the
-multiplication and square root can slow down the processing.
+The Manhattan distance between two points is defined as 
@mymath{|\Delta{x}|+|\Delta{y}|}.
+Thus the Manhattan metric has the advantage of being fast, but at the expense 
of being less accurate.
+The radial distance is the standard definition of distance in a Euclidean 
space: @mymath{\sqrt{\Delta{x}^2+\Delta{y}^2}}.
+It is accurate, but the multiplication and square root can slow down the 
processing.
 
 @item --interpnumngb=INT
 The number of nearby non-blank neighbors to use for interpolation.
@@ -8008,145 +6017,97 @@ The number of nearby non-blank neighbors to use for 
interpolation.
 @node Operating mode options,  , Processing options, Common options
 @subsubsection Operating mode options
 
-Another group of options that are common to all the programs in Gnuastro
-are those to do with the general operation of the programs. The explanation
-for those that are not only limited to Gnuastro but are common to all GNU
-programs start with (GNU option).
+Another group of options that are common to all the programs in Gnuastro are 
those to do with the general operation of the programs.
+The explanation for those that are not limited to Gnuastro, but are common to 
all GNU programs, starts with ``(GNU option)''.
 
 @vtable @option
 
 @item --
-(GNU option) Stop parsing the command-line. This option can be useful in
-scripts or when using the shell history. Suppose you have a long list of
-options, and want to see if removing some of them (to read from
-configuration files, see @ref{Configuration files}) can give a better
-result. If the ones you want to remove are the last ones on the
-command-line, you don't have to delete them, you can just add @option{--}
-before them and if you don't get what you want, you can remove the
-@option{--} and get the same initial result.
+(GNU option) Stop parsing the command-line.
+This option can be useful in scripts or when using the shell history.
+Suppose you have a long list of options, and want to see if removing some of 
them (to read from configuration files, see @ref{Configuration files}) can give 
a better result.
+If the ones you want to remove are the last ones on the command-line, you 
don't have to delete them; you can just add @option{--} before them, and if 
you don't get what you want, you can remove the @option{--} and get the same 
initial result.
 
 @item --usage
-(GNU option) Only print the options and arguments and abort. This is very
-useful for when you know the what the options do, and have just forgot
-their long/short identifiers, see @ref{--usage}.
+(GNU option) Only print the options and arguments and abort.
+This is very useful when you know what the options do, and have just 
forgotten their long/short identifiers, see @ref{--usage}.
 
 @item -?
 @itemx --help
-(GNU option) Print all options with an explanation and abort. Adding this
-option will print all the options in their short and long formats, also
-displaying which ones need a value if they are called (with an @option{=}
-after the long format followed by a string specifying the format, see
-@ref{Options}). A short explanation is also given for what the option is
-for. The program will quit immediately after the message is printed and
-will not do any form of processing, see @ref{--help}.
+(GNU option) Print all options with an explanation and abort.
+Adding this option will print all the options in their short and long formats, 
also displaying which ones need a value if they are called (with an @option{=} 
after the long format followed by a string specifying the format, see 
@ref{Options}).
+A short explanation is also given for what the option is for.
+The program will quit immediately after the message is printed and will not do 
any form of processing, see @ref{--help}.
 
 @item -V
 @itemx --version
-(GNU option) Print a short message, showing the full name, version,
-copyright information and program authors and abort. On the first line, it
-will print the official name (not executable name) and version number of
-the program. Following this is a blank line and a copyright
-information. The program will not run.
+(GNU option) Print a short message, showing the full name, version, copyright 
information and program authors and abort.
+On the first line, it will print the official name (not executable name) and 
version number of the program.
+Following this is a blank line and the copyright information.
+The program will not run.
 
 @item -q
 @itemx --quiet
-Don't report steps. All the programs in Gnuastro that have multiple major
-steps will report their steps for you to follow while they are
-operating. If you do not want to see these reports, you can call this
-option and only error/warning messages will be printed. If the steps are
-done very fast (depending on the properties of your input) disabling these
-reports will also decrease running time.
+Don't report steps.
+All the programs in Gnuastro that have multiple major steps will report their 
steps for you to follow while they are operating.
+If you do not want to see these reports, you can call this option and only 
error/warning messages will be printed.
+If the steps are done very fast (depending on the properties of your input), 
disabling these reports will also decrease the running time.
 
 @item --cite
-Print all necessary information to cite and acknowledge Gnuastro in your
-published papers. With this option, the programs will print the Bib@TeX{}
-entry to include in your paper for Gnuastro in general, and the particular
-program's paper (if that program comes with a separate paper). It will also
-print the necessary acknowledgment statement to add in the respective
-section of your paper and it will abort. For a more complete explanation,
-please see @ref{Acknowledgments}.
-
-Citations and acknowledgments are vital for the continued work on
-Gnuastro. Gnuastro started, and is continued, based on separate research
-projects. So if you find any of the tools offered in Gnuastro to be useful
-in your research, please use the output of this command to cite and
-acknowledge the program (and Gnuastro) in your research paper. Thank you.
-
-Gnuastro is still new, there is no separate paper only devoted to Gnuastro
-yet. Therefore currently the paper to cite for Gnuastro is the paper for
-NoiseChisel which is the first published paper introducing Gnuastro to the
-astronomical community. Upon reaching a certain point, a paper completely
-devoted to describing Gnuastro's many functionalities will be published,
-see @ref{GNU Astronomy Utilities 1.0}.
+Print all necessary information to cite and acknowledge Gnuastro in your 
published papers.
+With this option, the programs will print the Bib@TeX{} entry to include in 
your paper for Gnuastro in general, and the particular program's paper (if that 
program comes with a separate paper).
+It will also print the necessary acknowledgment statement to add in the 
respective section of your paper and it will abort.
+For a more complete explanation, please see @ref{Acknowledgments}.
+
+Citations and acknowledgments are vital for the continued work on Gnuastro.
+Gnuastro started, and is continued, based on separate research projects.
+So if you find any of the tools offered in Gnuastro to be useful in your 
research, please use the output of this command to cite and acknowledge the 
program (and Gnuastro) in your research paper.
+Thank you.
+
+Gnuastro is still new; there is no separate paper devoted only to Gnuastro yet.
+Therefore, the paper to currently cite for Gnuastro is the NoiseChisel paper, 
which is the first published paper introducing Gnuastro to the astronomical 
community.
+Upon reaching a certain point, a paper completely devoted to describing 
Gnuastro's many functionalities will be published, see @ref{GNU Astronomy 
Utilities 1.0}.
 
 @item -P
 @itemx --printparams
-With this option, Gnuastro's programs will read your command-line options
-and all the configuration files. If there is no problem (like a missing
-parameter or a value in the wrong format or range) and immediately before
-actually running, the programs will print the full list of option names,
-values and descriptions, sorted and grouped by context and abort. They will
-also report the version number, the date they were configured on your
-system and the time they were reported.
-
-As an example, you can give your full command-line options and even the
-input and output file names and finally just add @option{-P} to check if
-all the parameters are finely set. If everything is OK, you can just run
-the same command (easily retrieved from the shell history, with the top
-arrow key) and simply remove the last two characters that showed this
-option.
+With this option, Gnuastro's programs will read your command-line options and 
all the configuration files.
+If there is no problem (like a missing parameter or a value in the wrong 
format or range), then immediately before actually running, the programs will 
print the full list of option names, values and descriptions, sorted and 
grouped by context, and abort.
+They will also report the version number, the date they were configured on 
your system and the time they were reported.
 
-Since no program will actually start its processing when this option
-is called, the otherwise mandatory arguments for each program (for
-example input image or catalog files) are no longer required when you
-call this option.
+As an example, you can give your full command-line options and even the input 
and output file names, and finally just add @option{-P} to check if all the 
parameters are correctly set.
+If everything is OK, you can just run the same command (easily retrieved from 
the shell history, with the top arrow key) and simply remove the last two 
characters that showed this option.
+
+No program will actually start its processing when this option is called.
+The otherwise mandatory arguments for each program (for example input image or 
catalog files) are no longer required when you call this option.
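+
+For example, here is a sketch of checking the parameters of a (hypothetical) 
NoiseChisel call before actually running it:
+
+@example
+$ astnoisechisel image.fits --snquant=0.95 -P
+@end example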
 
 @item --config=STR
-Parse @option{STR} as a configuration file immediately when this option is
-confronted (see @ref{Configuration files}). The @option{--config} option
-can be called multiple times in one run of any Gnuastro program on the
-command-line or in the configuration files. In any case, it will be
-immediately read (before parsing the rest of the options on the
-command-line, or lines in a configuration file).
-
-Note that by definition, options on the command-line still take precedence
-over those in any configuration file, including the file(s) given to this
-option if they are called before it. Also see @option{--lastconfig} and
-@option{--onlyversion} on how this option can be used for reproducible
-results. You can use @option{--checkconfig} (below) to check/confirm the
-parsing of configuration files.
+Parse @option{STR} as a configuration file immediately when this option is 
confronted (see @ref{Configuration files}).
+The @option{--config} option can be called multiple times in one run of any 
Gnuastro program on the command-line or in the configuration files.
+In any case, it will be immediately read (before parsing the rest of the 
options on the command-line, or lines in a configuration file).
+
+Note that by definition, options on the command-line still take precedence 
over those in any configuration file, including the file(s) given to this 
option if they are called before it.
+Also see @option{--lastconfig} and @option{--onlyversion} on how this option 
can be used for reproducible results.
+You can use @option{--checkconfig} (below) to check/confirm the parsing of 
configuration files.
 
 @item --checkconfig
-Print options and their values, within the command-line or configuration
-files, as they are parsed (see @ref{Configuration file precedence}). If an
-option has already been set, or is ignored by the program, this option will
-also inform you with special values like @code{--ALREADY-SET--}. Only
-options that are parsed after this option are printed, so to see the
-parsing of all input options, it is recommended to put this option
-immediately after the program name before any other options.
+Print options and their values, within the command-line or configuration 
files, as they are parsed (see @ref{Configuration file precedence}).
+If an option has already been set, or is ignored by the program, this option 
will also inform you with special values like @code{--ALREADY-SET--}.
+Only options that are parsed after this option are printed, so to see the 
parsing of all input options, it is recommended to put this option immediately 
after the program name before any other options.
 
 @cindex Debug
-This is a very good option to confirm where the value of each option is has
-been defined in scenarios where there are multiple configuration files (for
-debugging).
+This is a very good option to confirm where the value of each option has been 
defined in scenarios where there are multiple configuration files (for 
debugging).
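+
+For example, a sketch of confirming the parsing of all input options for a 
(hypothetical) Crop call; note that this option comes immediately after the 
program name:
+
+@example
+$ astcrop --checkconfig image.fits --section=1:100,1:100
+@end example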
 
 @item -S
 @itemx --setdirconf
-Update the current directory configuration file for the Gnuastro program
-and quit. The full set of command-line and configuration file options will
-be parsed and options with a value will be written in the current directory
-configuration file for this program (see @ref{Configuration files}). If the
-configuration file or its directory doesn't exist, it will be created. If a
-configuration file exists it will be replaced (after it, and all other
-configuration files have been read). In any case, the program will not run.
-
-This is the recommended method@footnote{Alternatively, you can use your
-favorite text editor.} to edit/set the configuration file for all future
-calls to Gnuastro's programs. It will internally check if your values are
-in the correct range and type and save them according to the configuration
-file format, see @ref{Configuration file format}. So if there are
-unreasonable values to some options, the program will notify you and abort
-before writing the final configuration file.
+Update the current directory configuration file for the Gnuastro program and 
quit.
+The full set of command-line and configuration file options will be parsed and 
options with a value will be written in the current directory configuration 
file for this program (see @ref{Configuration files}).
+If the configuration file or its directory doesn't exist, it will be created.
+If a configuration file exists, it will be replaced (after it and all other 
configuration files have been read).
+In any case, the program will not run.
+
+This is the recommended method@footnote{Alternatively, you can use your 
favorite text editor.} to edit/set the configuration file for all future calls 
to Gnuastro's programs.
+It will internally check if your values are in the correct range and type and 
save them according to the configuration file format, see @ref{Configuration 
file format}.
+So if there are unreasonable values for some options, the program will notify 
you and abort before writing the final configuration file.
 
 When this option is called, the otherwise mandatory arguments, for
 example input image or catalog file(s), are no longer mandatory (since
@@ -8154,51 +6115,32 @@ the program will not run).
 
 @item -U
 @itemx --setusrconf
-Update the user configuration file and quit (see @ref{Configuration
-files}). See explanation under @option{--setdirconf} for more details.
+Update the user configuration file and quit (see @ref{Configuration files}).
+See explanation under @option{--setdirconf} for more details.
 
 @item --lastconfig
-This is the last configuration file that must be read. When this option is
-confronted in any stage of reading the options (on the command-line or in a
-configuration file), no other configuration file will be parsed, see
-@ref{Configuration file precedence} and @ref{Current directory and User
-wide}. Like all on/off options, on the command-line, this option doesn't
-take any values. But in a configuration file, it takes the values of
-@option{0} or @option{1}, see @ref{Configuration file format}. If it is
-present in a configuration file with a value of @option{0}, then all later
-occurrences of this option will be ignored.
+This is the last configuration file that must be read.
+When this option is confronted in any stage of reading the options (on the 
command-line or in a configuration file), no other configuration file will be 
parsed, see @ref{Configuration file precedence} and @ref{Current directory and 
User wide}.
+Like all on/off options, this option doesn't take any value on the 
command-line.
+But in a configuration file, it takes the values of @option{0} or @option{1}, 
see @ref{Configuration file format}.
+If it is present in a configuration file with a value of @option{0}, then all 
later occurrences of this option will be ignored.
 
 
 @item --onlyversion=STR
-Only run the program if Gnuastro's version is exactly equal to @option{STR}
-(see @ref{Version numbering}). Note that it is not compared as a number,
-but as a string of characters, so @option{0}, or @option{0.0} and
-@option{0.00} are different. If the running Gnuastro version is different,
-then this option will report an error and abort as soon as it is
-confronted on the command-line or in a configuration file. If the running
-Gnuastro version is the same as @option{STR}, then the program will run as
-if this option was not called.
-
-This is useful if you want your results to be exactly reproducible and not
-mistakenly run with an updated/newer or older version of the
-program. Besides internal algorithmic/behavior changes in programs, the
-existence of options or their names might change between versions
-(especially in these earlier versions of Gnuastro).
-
-Hence, when using this option (probably in a script or in a configuration
-file), be sure to call it before other options. The benefit is that, when
-the version differs, the other options won't be parsed and you, or your
-collaborators/users, won't get errors saying an option in your
-configuration doesn't exist in the running version of the program.
-
-Here is one example of how this option can be used in conjunction with the
-@option{--lastconfig} option. Let's assume that you were satisfied with the
-results of this command: @command{astnoisechisel image.fits --snquant=0.95}
-(along with various options set in various configuration files). You can
-save the state of NoiseChisel and reproduce that exact result on
-@file{image.fits} later by following these steps (the the extra spaces, and
-@key{\}, are only for easy readability, if you want to try it out, only one
-space between each token is enough).
+Only run the program if Gnuastro's version is exactly equal to @option{STR} 
(see @ref{Version numbering}).
+Note that it is not compared as a number, but as a string of characters, so 
@option{0}, @option{0.0} and @option{0.00} are all different.
+If the running Gnuastro version is different, then this option will report an 
error and abort as soon as it is confronted on the command-line or in a 
configuration file.
+If the running Gnuastro version is the same as @option{STR}, then the program 
will run as if this option was not called.
+
+This is useful if you want your results to be exactly reproducible and not 
mistakenly run with an updated/newer or older version of the program.
+Besides internal algorithmic/behavior changes in programs, the existence of 
options or their names might change between versions (especially in these 
earlier versions of Gnuastro).
+
+Hence, when using this option (probably in a script or in a configuration 
file), be sure to call it before other options.
+The benefit is that, when the version differs, the other options won't be 
parsed and you, or your collaborators/users, won't get errors saying an option 
in your configuration doesn't exist in the running version of the program.
+
+Here is one example of how this option can be used in conjunction with the 
@option{--lastconfig} option.
+Let's assume that you were satisfied with the results of this command: 
@command{astnoisechisel image.fits --snquant=0.95} (along with various options 
set in various configuration files).
+You can save the state of NoiseChisel and reproduce that exact result on 
@file{image.fits} later by following these steps (the extra spaces and 
@key{\} are only for easy readability; if you want to try it out, one space 
between each token is enough).
 
 @example
 $ echo "onlyversion X.XX"             > reproducible.conf
@@ -8207,55 +6149,40 @@ $ astnoisechisel image.fits --snquant=0.95 -P           
 \
                                      >> reproducible.conf
 @end example
 
-@option{--onlyversion} was available from Gnuastro 0.0, so putting it
-immediately at the start of a configuration file will ensure that later,
-you (or others using different version) won't get a non-recognized option
-error in case an option was added/removed. @option{--lastconfig} will
-inform the installed NoiseChisel to not parse any other configuration
-files. This is done because we don't want the user's user-wide or system
-wide option values affecting our results. Finally, with the third command,
-which has a @option{-P} (short for @option{--printparams}), NoiseChisel
-will print all the option values visible to it (in all the configuration
-files) and the shell will append them to @file{reproduce.conf}. Hence, you
-don't have to worry about remembering the (possibly) different options in
-the different configuration files.
+@option{--onlyversion} was available from Gnuastro 0.0, so putting it 
immediately at the start of a configuration file will ensure that later, you 
(or others using a different version) won't get a non-recognized option error 
in case an option was added/removed.
+@option{--lastconfig} will inform the installed NoiseChisel not to parse any 
other configuration files.
+This is done because we don't want the user's user-wide or system-wide option 
values affecting our results.
+Finally, with the third command, which has a @option{-P} (short for 
@option{--printparams}), NoiseChisel will print all the option values visible 
to it (in all the configuration files) and the shell will append them to 
@file{reproduce.conf}.
+Hence, you don't have to worry about remembering the (possibly) different 
options in the different configuration files.
 
-Afterwards, if you run NoiseChisel as shown below (telling it to read this
-configuration file with the @file{--config} option). You can be sure that
-there will either be an error (for version mismatch) or it will produce
-exactly the same result that you got before.
+Afterwards, if you run NoiseChisel as shown below (telling it to read this 
configuration file with the @option{--config} option), you can be sure that 
there will either be an error (for version mismatch) or it will produce 
exactly the same result that you got before.
 
 @example
 $ astnoisechisel --config=reproducible.conf
 @end example
 
 @item --log
-Some programs can generate extra information about their outputs in a log
-file. When this option is called in those programs, the log file will also
-be printed. If the program doesn't generate a log file, this option is
-ignored.
+Some programs can generate extra information about their outputs in a log file.
+When this option is called in those programs, the log file will also be 
printed.
+If the program doesn't generate a log file, this option is ignored.
 
 @cartouche
 @noindent
-@strong{@option{--log} isn't thread-safe}: The log file usually has a fixed
-name. Therefore if two simultaneous calls (with @option{--log}) of a
-program are made in the same directory, the program will try to write to
-the same file. This will cause problems like unreasonable log file,
-undefined behavior, or a crash.
+@strong{@option{--log} isn't thread-safe}: The log file usually has a fixed 
name.
+Therefore if two simultaneous calls (with @option{--log}) of a program are 
made in the same directory, the program will try to write to the same file.
+This will cause problems like an unreasonable log file, undefined behavior, 
or a crash.
 @end cartouche
 
 @cindex CPU threads, set number
 @cindex Number of CPU threads to use
 @item -N INT
 @itemx --numthreads=INT
-Use @option{INT} CPU threads when running a Gnuastro program (see
-@ref{Multi-threaded operations}). If the value is zero (@code{0}), or this
-option is not given on the command-line or any configuration file, the
-value will be determined at run-time: the maximum number of threads
-available to the system when you run a Gnuastro program.
+Use @option{INT} CPU threads when running a Gnuastro program (see 
@ref{Multi-threaded operations}).
+If the value is zero (@code{0}), or this option is not given on the 
command-line or any configuration file, the value will be determined at 
run-time: the maximum number of threads available to the system when you run a 
Gnuastro program.
 
-Note that multi-threaded programming is only relevant to some programs. In
-others, this option will be ignored.
+Note that multi-threaded programming is only relevant to some programs.
+In others, this option will be ignored.
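+
+For example, a sketch of limiting a (hypothetical) NoiseChisel run to 4 CPU 
threads:
+
+@example
+$ astnoisechisel image.fits --numthreads=4
+@end example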
 
 @end vtable
 
@@ -8268,94 +6195,67 @@ others, this option will be ignored.
 
 @cindex Standard input
 @cindex Stream: standard input
-The most common way to feed the primary/first input dataset into a program
-is to give its filename as an argument (discussed in @ref{Arguments}). When
-you want to run a series of programs in sequence, this means that each will
-have to keep the output of each program in a separate file and re-type that
-file's name in the next command. This can be very slow and frustrating
-(mis-typing a file's name).
+The most common way to feed the primary/first input dataset into a program is 
to give its filename as an argument (discussed in @ref{Arguments}).
+When you want to run a series of programs in sequence, this means that you 
will have to keep the output of each program in a separate file and re-type 
that file's name in the next command.
+This can be very slow and frustrating (mis-typing a file's name).
 
 @cindex Standard output stream
 @cindex Stream: standard output
-To solve the problem, the founders of Unix defined pipes to directly feed
-the output of one program (its ``Standard output'' stream) into the
-``standard input'' of a next program. This removes the need to make
-temporary files between separate processes and became one of the best
-demonstrations of the Unix-way, or Unix philosophy.
-
-Every program has three streams identifying where it reads/writes non-file
-inputs/outputs: @emph{Standard input}, @emph{Standard output}, and
-@emph{Standard error}. When a program is called alone, all three are
-directed to the terminal that you are using. If it needs an input, it will
-prompt you for one and you can type it in. Or, it prints its results in the
-terminal for you to see.
-
-For example, say you have a FITS table/catalog containing the B and V band
-magnitudes (@code{MAG_B} and @code{MAG_V} columns) of a selection of
-galaxies along with many other columns. If you want to see only these two
-columns in your terminal, can use Gnuastro's @ref{Table} program like
-below:
+To solve the problem, the founders of Unix defined pipes to directly feed the 
output of one program (its ``Standard output'' stream) into the ``standard 
input'' of a next program.
+This removes the need to make temporary files between separate processes and 
became one of the best demonstrations of the Unix-way, or Unix philosophy.
+
+Every program has three streams identifying where it reads/writes non-file 
inputs/outputs: @emph{Standard input}, @emph{Standard output}, and 
@emph{Standard error}.
+When a program is called alone, all three are directed to the terminal that 
you are using.
+If it needs an input, it will prompt you for one and you can type it in.
+Or, it prints its results in the terminal for you to see.
+
+For example, say you have a FITS table/catalog containing the B and V band 
magnitudes (@code{MAG_B} and @code{MAG_V} columns) of a selection of galaxies 
along with many other columns.
+If you want to see only these two columns in your terminal, you can use 
Gnuastro's @ref{Table} program like below:
 
 @example
 $ asttable cat.fits -cMAG_B,MAG_V
 @end example
 
-Through the Unix pipe mechanism, when the shell confronts the pipe
-character (@key{|}), it connects the standard output of the program before
-the pipe, to the standard input of the program after it. So it is literally
-a ``pipe'': everything that you would see printed by the first program on
-the command (without any pipe), is now passed to the second program (and
-not seen by you).
+Through the Unix pipe mechanism, when the shell confronts the pipe character 
(@key{|}), it connects the standard output of the program before the pipe, to 
the standard input of the program after it.
+So it is literally a ``pipe'': everything that you would see printed by the 
first program on the command (without any pipe), is now passed to the second 
program (and not seen by you).
 
 @cindex AWK
 @cindex GNU AWK
-To continue the previous example, let's say you want to see the B-V
-color. To do this, you can pipe Table's output to AWK (a wonderful tool for
-processing things like plain text tables):
+To continue the previous example, let's say you want to see the B-V color.
+To do this, you can pipe Table's output to AWK (a wonderful tool for 
processing things like plain text tables):
 
 @example
 $ asttable cat.fits -cMAG_B,MAG_V | awk '@{print $1-$2@}'
 @end example
 
-But understanding the distribution by visually seeing all the numbers under
-each other is not too useful! You can therefore feed this single column
-information into @ref{Statistics} to give you a general feeling of the
-distribution with the same command:
+But understanding the distribution by visually seeing all the numbers under 
each other is not too useful!
+You can therefore feed this single column information into @ref{Statistics} 
to give you a general feeling of the distribution with the same command:
 
 @example
 $ asttable cat.fits -cMAG_B,MAG_V | awk '@{print $1-$2@}' | aststatistics
 @end example
 
-Gnuastro's programs that accept input from standard input, only look into
-the Standard input stream if there is no first argument. In other words,
-arguments take precedence over Standard input. When no argument is
-provided, the programs check if the standard input stream is already full
-or not (output from another program is waiting to be used). If data is
-present in the standard input stream, it is used.
+Gnuastro's programs that accept input from standard input only look into the 
Standard input stream if there is no first argument.
+In other words, arguments take precedence over Standard input.
+When no argument is provided, the programs check if the standard input stream 
is already full or not (output from another program is waiting to be used).
+If data is present in the standard input stream, it is used.
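+
+For example, in the sketch below (with a hypothetical plain-text table 
@file{table.txt}), the two commands are effectively the same: in the first, 
Table reads the file given as an argument; in the second, it reads the same 
contents from its standard input:
+
+@example
+$ asttable table.txt
+$ cat table.txt | asttable
+@end example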
 
-When the standard input is empty, the program will wait
-@option{--stdintimeout} micro-seconds for you to manually enter the first
-line (ending with a new-line character, or the @key{ENTER} key, see
-@ref{Input output options}). If it detects the first line in this time,
-there is no more time limit, and you can manually write/type all the lines
-for as long as it takes. To inform the program that Standard input has
-finished, press @key{CTRL-D} after a new line. If the program doesn't catch
-the first line before the time-out finishes, it will abort with an error
-saying that no input was provided.
+When the standard input is empty, the program will wait @option{--stdintimeout} microseconds for you to manually enter the first line (ending with a new-line character, or the @key{ENTER} key, see @ref{Input output options}).
+If it detects the first line in this time, there is no more time limit, and you can manually write/type all the lines for as long as it takes.
+To inform the program that standard input has finished, press @key{CTRL-D} after a new line.
+If the program doesn't catch the first line before the time-out finishes, it will abort with an error saying that no input was provided.
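+
+As a sketch (the time-out value is arbitrary), the command below would wait ten seconds for you to start typing the first line before aborting:
+@example
+$ aststatistics --stdintimeout=10000000
+@end example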
 
 @cartouche
 @noindent
-@strong{Manual input in Standard input is discarded: } Be careful that when
-you manually fill the Standard input, the data will be discarded once the
-program finishes and reproducing the result will be impossible. Therefore
-this form of providing input is only good for temporary tests.
+@strong{Manual input in Standard input is discarded:}
+Be careful: when you fill the standard input manually, the data will be discarded once the program finishes, so reproducing the result will be impossible.
+Therefore this form of providing input is only good for temporary tests.
 @end cartouche
 
 @cartouche
 @noindent
-@strong{Standard input currently only for plain text: } Currently Standard
-input only works for plain text inputs like the example above. We will
-later allow FITS files into the programs through standard input also.
+@strong{Standard input currently only for plain text:}
+Currently standard input only works for plain text inputs like the example above.
+We will later also allow FITS files to be given to the programs through standard input.
 @end cartouche
 
 
@@ -8367,14 +6267,10 @@ later allow FITS files into the programs through 
standard input also.
 @cindex Necessary parameters
 @cindex Default option values
 @cindex File system Hierarchy Standard
-Each program needs a certain number of parameters to run. Supplying
-all the necessary parameters each time you run the program is very
-frustrating and prone to errors. Therefore all the programs read the
-values for the necessary options you have not given in the command
-line from one of several plain text files (which you can view and edit
-with any text editor). These files are known as configuration files
-and are usually kept in a directory named @file{etc/} according to the
-file system hierarchy
+Each program needs a certain number of parameters to run.
+Supplying all the necessary parameters each time you run the program is very frustrating and prone to errors.
+Therefore all the programs read the values for the necessary options that you have not given on the command-line from one of several plain text files (which you can view and edit with any text editor).
+These files are known as configuration files and are usually kept in a directory named @file{etc/} according to the file system hierarchy standard@footnote{@url{http://en.wikipedia.org/wiki/Filesystem_Hierarchy_Standard}}.
 
 @vindex --output
@@ -8382,23 +6278,12 @@ 
standard@footnote{@url{http://en.wikipedia.org/wiki/Filesystem_Hierarchy_Standar
 @cindex CPU threads, number
 @cindex Internal default value
 @cindex Number of CPU threads to use
-The thing to have in mind is that none of the programs in Gnuastro keep any
-internal default value. All the values must either be stored in one of the
-configuration files or explicitly called in the command-line. In case the
-necessary parameters are not given through any of these methods, the
-program will print a missing option error and abort. The only exception to
-this is @option{--numthreads}, whose default value is determined at
-run-time using the number of threads available to your system, see
-@ref{Multi-threaded operations}. Of course, you can still provide a default
-value for the number of threads at any of the levels below, but if you
-don't, the program will not abort. Also note that through automatic output
-name generation, the value to the @option{--output} option is also not
-mandatory on the command-line or in the configuration files for all
-programs which don't rely on that value as an input@footnote{One example of
-a program which uses the value given to @option{--output} as an input is
-ConvertType, this value specifies the type of the output through the value
-to @option{--output}, see @ref{Invoking astconvertt}.}, see @ref{Automatic
-output}.
+The thing to have in mind is that none of the programs in Gnuastro keep any internal default value.
+All the values must either be stored in one of the configuration files or explicitly given on the command-line.
+In case the necessary parameters are not given through any of these methods, the program will print a missing option error and abort.
+The only exception to this is @option{--numthreads}, whose default value is determined at run-time using the number of threads available to your system, see @ref{Multi-threaded operations}.
+Of course, you can still provide a default value for the number of threads at any of the levels below, but if you don't, the program will not abort.
+Also note that through automatic output name generation, the @option{--output} option is also not mandatory on the command-line or in the configuration files for all programs which don't rely on that value as an input@footnote{One example of a program which uses the value given to @option{--output} as an input is ConvertType: the value given to @option{--output} specifies the type (format) of the output, see @ref{Invoking astconvertt}.}, see @ref{Automatic output}.
 
 
 
@@ -8413,45 +6298,30 @@ output}.
 @subsection Configuration file format
 
 @cindex Configuration file suffix
-The configuration files for each program have the standard program
-executable name with a `@file{.conf}' suffix. When you download the source
-code, you can find them in the same directory as the source code of each
-program, see @ref{Program source}.
+The configuration files for each program have the standard program executable 
name with a `@file{.conf}' suffix.
+When you download the source code, you can find them in the same directory as 
the source code of each program, see @ref{Program source}.
 
 @cindex White space character
 @cindex Configuration file format
-Any line in the configuration file whose first non-white character is a
-@key{#} is considered to be a comment and is ignored. An empty line is also
-similarly ignored. The long name of the option should be used as an
-identifier. The parameter name and parameter value have to be separated by
-any number of `white-space' characters: space, tab or vertical tab. By
-default several space characters are used. If the value of an option has
-space characters (most commonly for the @option{hdu} option), then the full
-value can be enclosed in double quotation signs (@key{"}, similar to the
-example in @ref{Arguments and options}). If it is an option without a value
-in the @option{--help} output (on/off option, see @ref{Options}), then the
-value should be @option{1} if it is to be `on' and @option{0} otherwise.
-
-In each non-commented and non-blank line, any text after the first two
-words (option identifier and value) is ignored. If an option identifier is
-not recognized in the configuration file, the name of the file, the line
-number of the unrecognized option, and the unrecognized identifier name
-will be reported and the program will abort. If a parameter is repeated
-more more than once in the configuration files, accepts only one value, and
-is not set on the command-line, then only the first value will be used, the
-rest will be ignored.
+Any line in the configuration file whose first non-white character is a 
@key{#} is considered to be a comment and is ignored.
+An empty line is also similarly ignored.
+The long name of the option should be used as an identifier.
+The parameter name and parameter value have to be separated by any number of 
`white-space' characters: space, tab or vertical tab.
+By default several space characters are used.
+If the value of an option has space characters (most commonly for the 
@option{hdu} option), then the full value can be enclosed in double quotation 
signs (@key{"}, similar to the example in @ref{Arguments and options}).
+If it is an option without a value in the @option{--help} output (on/off 
option, see @ref{Options}), then the value should be @option{1} if it is to be 
`on' and @option{0} otherwise.
+
+In each non-commented and non-blank line, any text after the first two words 
(option identifier and value) is ignored.
+If an option identifier is not recognized in the configuration file, the name 
of the file, the line number of the unrecognized option, and the unrecognized 
identifier name will be reported and the program will abort.
+If a parameter is repeated more than once in the configuration files, only accepts one value, and is not set on the command-line, then only the first value will be used and the rest will be ignored.
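+
+As an illustrative sketch (the option names are real common options, but the values are arbitrary), a configuration file may therefore look like this:
+@example
+# HDU (extension) to use in the input FITS files:
+ hdu        1
+
+# A value containing spaces must be quoted:
+ output     "my dir/out.fits"
+
+# An on/off option (1 is `on'):
+ quiet      1         any text after the value is ignored
+@end example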
 
 @cindex Writing configuration files
 @cindex Automatic configuration file writing
 @cindex Configuration files, writing
-You can build or edit any of the directories and the configuration files
-yourself using any text editor.  However, it is recommended to use the
-@option{--setdirconf} and @option{--setusrconf} options to set default
-values for the current directory or this user, see @ref{Operating mode
-options}. With these options, the values you give will be checked before
-writing in the configuration file. They will also print a set of commented
-lines guiding the reader and will also classify the options based on their
-context and write them in their logical order to be more understandable.
+You can build or edit any of the directories and the configuration files yourself using any text editor.
+However, it is recommended to use the @option{--setdirconf} and @option{--setusrconf} options to set default values for the current directory or this user, see @ref{Operating mode options}.
+With these options, the values you give will be checked before being written into the configuration file.
+These options will also print a set of commented lines guiding the reader, and will classify the options based on their context, writing them in their logical order to be more understandable.
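+
+For example (the HDU value here is only for illustration), a sketch of the command below would check the given value and write it into @file{.gnuastro/astcrop.conf} of the current directory:
+@example
+$ astcrop --hdu=1 --setdirconf
+@end example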
 
 
 @node Configuration file precedence, Current directory and User wide, 
Configuration file format, Configuration files
@@ -8460,105 +6330,68 @@ context and write them in their logical order to be 
more understandable.
 @cindex Configuration file precedence
 @cindex Configuration file directories
 @cindex Precedence, configuration files
-The option values in all the programs of Gnuastro will be filled in the
-following order. If an option only takes one value which is given in an
-earlier step, any value for that option in a later step will be
-ignored. Note that if the @option{lastconfig} option is specified in any
-step below, no other configuration files will be parsed (see @ref{Operating
-mode options}).
+The option values in all the programs of Gnuastro will be filled in the following order.
+If an option only takes one value and that value is given in an earlier step, any value for that option in a later step will be ignored.
+Note that if the @option{lastconfig} option is specified in any step below, no other configuration files will be parsed (see @ref{Operating mode options}).
 
 @enumerate
 @item
 Command-line options, for a particular run of ProgramName.
 
 @item
-@file{.gnuastro/astprogname.conf} is parsed by ProgramName in the current
-directory.
+@file{.gnuastro/astprogname.conf} is parsed by ProgramName in the current 
directory.
 
 @item
-@file{.gnuastro/gnuastro.conf} is parsed by all Gnuastro programs in the
-current directory.
+@file{.gnuastro/gnuastro.conf} is parsed by all Gnuastro programs in the 
current directory.
 
 @item
-@file{$HOME/.local/etc/astprogname.conf} is parsed by ProgramName in the
-user's home directory (see @ref{Current directory and User wide}).
+@file{$HOME/.local/etc/astprogname.conf} is parsed by ProgramName in the 
user's home directory (see @ref{Current directory and User wide}).
 
 @item
-@file{$HOME/.local/etc/gnuastro.conf} is parsed by all Gnuastro programs in
-the user's home directory (see @ref{Current directory and User wide}).
+@file{$HOME/.local/etc/gnuastro.conf} is parsed by all Gnuastro programs in 
the user's home directory (see @ref{Current directory and User wide}).
 
 @item
-@file{prefix/etc/astprogname.conf} is parsed by ProgramName in the
-system-wide installation directory (see @ref{System wide} for
-@file{prefix}).
+@file{prefix/etc/astprogname.conf} is parsed by ProgramName in the system-wide 
installation directory (see @ref{System wide} for @file{prefix}).
 
 @item
-@file{prefix/etc/gnuastro.conf} is parsed by all Gnuastro programs in the
-system-wide installation directory (see @ref{System wide} for
-@file{prefix}).
+@file{prefix/etc/gnuastro.conf} is parsed by all Gnuastro programs in the 
system-wide installation directory (see @ref{System wide} for @file{prefix}).
 
 @end enumerate
 
-The basic idea behind setting this progressive state of checking for
-parameter values is that separate users of a computer or separate folders
-in a user's file system might need different values for some
-parameters.
+The basic idea behind this progressive order of checking for parameter values is that separate users of a computer, or separate directories in a user's file system, might need different values for some parameters.
 
 @cartouche
 @noindent
-@strong{Checking the order:} You can confirm/check the order of parsing
-configuration files using the @option{--checkconfig} option with any
-Gnuastro program, see @ref{Operating mode options}. Just be sure to place
-this option immediately after the program name, before any other option.
+@strong{Checking the order:}
+You can confirm/check the order of parsing configuration files using the 
@option{--checkconfig} option with any Gnuastro program, see @ref{Operating 
mode options}.
+Just be sure to place this option immediately after the program name, before 
any other option.
 @end cartouche
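+
+For example (the input name is hypothetical), a sketch of such a check with Crop would be:
+@example
+$ astcrop --checkconfig image.fits
+@end example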
 
-As you see above, there can also be a configuration file containing the
-common options in all the programs: @file{gnuastro.conf} (see @ref{Common
-options}). If options specific to one program are specified in this file,
-there will be unrecognized option errors, or unexpected behavior if the
-option has different behavior in another program. On the other hand, there
-is no problem with @file{astprogname.conf} containing common
-options@footnote{As an example, the @option{--setdirconf} and
-@option{--setusrconf} options will also write the common options they have
-read in their produced @file{astprogname.conf}.}.
+As you see above, there can also be a configuration file containing the common options in all the programs: @file{gnuastro.conf} (see @ref{Common options}).
+If options specific to one program are specified in this file, there will be unrecognized option errors, or unexpected behavior if the option behaves differently in another program.
+On the other hand, there is no problem with @file{astprogname.conf} containing common options@footnote{As an example, the @option{--setdirconf} and @option{--setusrconf} options will also write the common options they have read into their produced @file{astprogname.conf}.}.
 
 @cartouche
 @noindent
-@strong{Manipulating the order:} You can manipulate this order or add new
-files with the following two options which are fully described in
+@strong{Manipulating the order:}
+You can manipulate this order, or add new files, with the following two options which are fully described in
 @ref{Operating mode options}:
 @table @option
 @item --config
-Allows you to define any file to be parsed as a configuration file on the
-command-line or within the any other configuration file. Recall that the
-file given to @option{--config} is parsed immediately when this option is
-confronted (on the command-line or in a configuration file).
+Allows you to define any file to be parsed as a configuration file on the command-line or within any other configuration file.
+Recall that the file given to @option{--config} is parsed immediately when this option is confronted (on the command-line or in a configuration file).
 
 @item --lastconfig
-Allows you to stop the parsing of subsequent configuration files. Note that
-if this option is given in a configuration file, it will be fully read, so
-its position in the configuration doesn't matter (unlike
-@option{--config}).
+Allows you to stop the parsing of subsequent configuration files.
+Note that if this option is given in a configuration file, it will be fully 
read, so its position in the configuration doesn't matter (unlike 
@option{--config}).
 @end table
 @end cartouche
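+
+For example (the file name is hypothetical), any file can be given as a configuration file like this:
+@example
+$ asttable --config=project.conf cat.fits
+@end example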
 
+One example of benefiting from these configuration files can be this: raw telescope images usually have their main image extension in the second FITS extension, while processed FITS images usually only have one extension.
+If your system-wide default input extension is 0 (the first), then when you want to work with the former group of data, you have to explicitly mention the extension to the programs every time.
+With this progressive order of default values to check, you can set different default values for the different directories that you would like to run Gnuastro in for your different purposes, so you won't have to worry about this issue any more.
 
-One example of benefiting from these configuration files can be this: raw
-telescope images usually have their main image extension in the second FITS
-extension, while processed FITS images usually only have one extension. If
-your system-wide default input extension is 0 (the first), then when you
-want to work with the former group of data you have to explicitly mention
-it to the programs every time. With this progressive state of default
-values to check, you can set different default values for the different
-directories that you would like to run Gnuastro in for your different
-purposes, so you won't have to worry about this issue any more.
-
-The same can be said about the @file{gnuastro.conf} files: by specifying a
-behavior in this single file, all Gnuastro programs in the respective
-directory, user, or system-wide steps will behave similarly. For example to
-keep the input's directory when no specific output is given (see
-@ref{Automatic output}), or to not delete an existing file if it has the
-same name as a given output (see @ref{Input output options}).
+The same can be said about the @file{gnuastro.conf} files: by specifying a 
behavior in this single file, all Gnuastro programs in the respective 
directory, user, or system-wide steps will behave similarly.
+For example to keep the input's directory when no specific output is given 
(see @ref{Automatic output}), or to not delete an existing file if it has the 
same name as a given output (see @ref{Input output options}).
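+
+As a sketch of the raw-telescope-image scenario above, a @file{.gnuastro/gnuastro.conf} in that data's directory could simply contain the line below, so all Gnuastro programs run there would default to the second extension:
+@example
+ hdu     1
+@end example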
 
 
 @node Current directory and User wide, System wide, Configuration file 
precedence, Configuration files
@@ -8567,24 +6400,17 @@ same name as a given output (see @ref{Input output 
options}).
 @cindex @file{$HOME}
 @cindex @file{./.gnuastro/}
 @cindex @file{$HOME/.local/etc/}
-For the current (local) and user-wide directories, the configuration files
-are stored in the hidden sub-directories named @file{.gnuastro/} and
-@file{$HOME/.local/etc/} respectively. Unless you have changed it, the
-@file{$HOME} environment variable should point to your home directory. You
-can check it by running @command{$ echo $HOME}. Each time you run any of
-the programs in Gnuastro, this environment variable is read and placed in
-the above address. So if you suddenly see that your home configuration
-files are not being read, probably you (or some other program) has changed
-the value of this environment variable.
+For the current (local) and user-wide directories, the configuration files are stored in the hidden sub-directories named @file{.gnuastro/} and @file{$HOME/.local/etc/} respectively.
+Unless you have changed it, the @file{$HOME} environment variable should point to your home directory.
+You can check it by running @command{$ echo $HOME}.
+Each time you run any of the programs in Gnuastro, this environment variable is read and used as the base of the address above.
+So if you suddenly see that your home configuration files are not being read, it is probably because you (or some other program) have changed the value of this environment variable.
 
 @vindex --setdirconf
 @vindex --setusrconf
-Although it might cause confusions like above, this dependence on the
-@file{HOME} environment variable enables you to temporarily use a different
-directory as your home directory. This can come in handy in complicated
-situations. To set the user or current directory configuration files based
-on your command-line input, you can use the @option{--setdirconf} or
-@option{--setusrconf}, see @ref{Operating mode options}.
+Although it might cause confusion like the above, this dependence on the @file{HOME} environment variable enables you to temporarily use a different directory as your home directory.
+This can come in handy in complicated situations.
+To set the user or current directory configuration files based on your command-line input, you can use the @option{--setdirconf} or @option{--setusrconf} options, see @ref{Operating mode options}.
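+
+For example (the directory and input name are hypothetical), you can temporarily use another home directory for a single command by prefixing it with a new value:
+@example
+$ HOME=/path/to/other/home aststatistics image.fits
+@end example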
 
 
 
@@ -8594,28 +6420,16 @@ on your command-line input, you can use the 
@option{--setdirconf} or
 @cindex @file{prefix/etc/}
 @cindex System wide configuration files
 @cindex Configuration files, system wide
-When Gnuastro is installed, the configuration files that are shipped with
-the distribution are copied into the (possibly system wide)
-@file{prefix/etc/} directory. For more details on @file{prefix}, see
-@ref{Installation directory} (by default it is: @file{/usr/local}). This
-directory is the final place (with the lowest priority) that the programs
-in Gnuastro will check to retrieve parameter values.
+When Gnuastro is installed, the configuration files that are shipped with the 
distribution are copied into the (possibly system wide) @file{prefix/etc/} 
directory.
+For more details on @file{prefix}, see @ref{Installation directory} (by 
default it is: @file{/usr/local}).
+This directory is the final place (with the lowest priority) that the programs 
in Gnuastro will check to retrieve parameter values.
 
-If you remove an option and its value from the system wide configuration
-files, you either have to specify it in more immediate configuration files
-or set it each time in the command-line. Recall that none of the programs
-in Gnuastro keep any internal default values and will abort if they don't
-find a value for the necessary parameters (except the number of threads and
-output file name). So even though you might never expect to use an optional
-option, it safe to have it available in this system-wide configuration file
-even if you don't intend to use it frequently.
+If you remove an option and its value from the system wide configuration files, you either have to specify it in more immediate configuration files or set it each time on the command-line.
+Recall that none of the programs in Gnuastro keep any internal default values and will abort if they don't find a value for the necessary parameters (except the number of threads and output file name).
+So even though you might never expect to use an optional option, it is safe to have it available in this system-wide configuration file even if you don't intend to use it frequently.
 
-Note that in case you install Gnuastro from your distribution's
-repositories, @file{prefix} will either be set to @file{/} (the root
-directory) or @file{/usr}, so you can find the system wide configuration
-variables in @file{/etc/} or @file{/usr/etc/}. The prefix of
-@file{/usr/local/} is conventionally used for programs you install from
-source by your self as in @ref{Quick start}.
+Note that in case you install Gnuastro from your distribution's repositories, @file{prefix} will either be set to @file{/} (the root directory) or @file{/usr}, so you can find the system wide configuration variables in @file{/etc/} or @file{/usr/etc/}.
+The prefix of @file{/usr/local/} is conventionally used for programs you install from source by yourself, as in @ref{Quick start}.
 
 
 
@@ -8633,43 +6447,30 @@ source by your self as in @ref{Quick start}.
 @cindex Book formats
 @cindex Remembering options
 @cindex Convenient book formats
-Probably the first time you read this book, it is either in the PDF
-or HTML formats. These two formats are very convenient for when you
-are not actually working, but when you are only reading. Later on,
-when you start to use the programs and you are deep in the middle of
-your work, some of the details will inevitably be forgotten. Going to
-find the PDF file (printed or digital) or the HTML webpage is a major
-distraction.
+The first time you read this book, it is probably in either the PDF or HTML format.
+These two formats are very convenient when you are only reading, but not when you are actually working.
+Later on, when you start to use the programs and you are deep in the middle of your work, some of the details will inevitably be forgotten.
+Going to find the PDF file (printed or digital) or the HTML webpage is a major distraction.
 
 @cindex Online help
 @cindex Command-line help
-GNU software have a very unique set of tools for aiding your memory on
-the command-line, where you are working, depending how much of it you
-need to remember. In the past, such command-line help was known as
-``online'' help, because they were literally provided to you `on'
-the command `line'. However, nowadays the word ``online'' refers to
-something on the internet, so that term will not be used. With this
-type of help, you can resume your exciting research without taking
-your hands off the keyboard.
+GNU software has a unique set of tools for aiding your memory on the command-line, where you are working, depending on how much of it you need to remember.
+In the past, such command-line help was known as ``online'' help, because it was literally provided to you `on' the command `line'.
+However, nowadays the word ``online'' refers to something on the internet, so that term will not be used here.
+With this type of help, you can resume your exciting research without taking your hands off the keyboard.
 
 @cindex Installed help methods
-Another major advantage of such command-line based help routines is
-that they are installed with the software in your computer, therefore
-they are always in sync with the executable you are actually
-running. Three of them are actually part of the executable. You don't
-have to worry about the version of the book or program. If you rely
-on external help (a PDF in your personal print or digital archive or
-HTML from the official webpage) you have to check to see if their
-versions fit with your installed program.
-
-If you only need to remember the short or long names of the options,
-@option{--usage} is advised. If it is what the options do, then
-@option{--help} is a great tool. Man pages are also provided for those
-who are use to this older system of documentation. This full book is
-also available to you on the command-line in Info format. If none of
-these seems to resolve the problems, there is a mailing list which
-enables you to get in touch with experienced Gnuastro users. In the
-subsections below each of these methods are reviewed.
+Another major advantage of such command-line based help routines is that they are installed with the software on your computer, so they are always in sync with the executable you are actually running.
+Three of them are actually part of the executable.
+You don't have to worry about the version of the book or program.
+If you rely on external help (a PDF in your personal print or digital archive, or HTML from the official webpage), you have to check whether their versions match your installed program.
+
+If you only need to remember the short or long names of the options, @option{--usage} is advised.
+If it is what the options do, then @option{--help} is a great tool.
+Man pages are also provided for those who are used to this older system of documentation.
+This full book is also available to you on the command-line in Info format.
+If none of these seems to resolve your problem, there is a mailing list which enables you to get in touch with experienced Gnuastro users.
+Each of these methods is reviewed in the subsections below.
 
 
 @menu
@@ -8686,10 +6487,10 @@ subsections below each of these methods are reviewed.
 @cindex Usage pattern
 @cindex Mandatory arguments
 @cindex Optional and mandatory tokens
-If you give this option, the program will not run. It will only print a
-very concise message showing the options and arguments. Everything within
-square brackets (@option{[]}) is optional. For example here are the first
-and last two lines of Crop's @option{--usage} is shown:
+If you give this option, the program will not run.
+It will only print a very concise message showing the options and arguments.
+Everything within square brackets (@option{[]}) is optional.
+For example, here are the first and last two lines of Crop's @option{--usage} output:
 
 @example
 $ astcrop --usage
@@ -8700,77 +6501,65 @@ Usage: astcrop [-Do?IPqSVW] [-d INT] [-h INT] [-r INT] 
[-w INT]
             [ASCIIcatalog] FITSimage(s).fits
 @end example
 
-There are no explanations on the options, just their short and long
-names shown separately. After the program name, the short format of
-all the options that don't require a value (on/off options) is
-displayed. Those that do require a value then follow in separate
-brackets, each displaying the format of the input they want, see
-@ref{Options}. Since all options are optional, they are shown in
-square brackets, but arguments can also be optional. For example in
-this example, a catalog name is optional and is only required in some
-modes. This is a standard method of displaying optional arguments for
-all GNU software.
+There are no explanations on the options, just their short and long names shown separately.
+After the program name, the short format of all the options that don't require a value (on/off options) is displayed.
+Those that do require a value then follow in separate brackets, each displaying the format of the input they want, see @ref{Options}.
+Since all options are optional, they are shown in square brackets, but arguments can also be optional.
+For example here, a catalog name is optional and is only required in some modes.
+This is a standard method of displaying optional arguments for all GNU software.
 
 @node --help, Man pages, --usage, Getting help
 @subsection @option{--help}
 
 @vindex --help
-If the command-line includes this option, the program will not be
-run. It will print a complete list of all available options along with
-a short explanation. The options are also grouped by their
-context. Within each context, the options are sorted
-alphabetically. Since the options are shown in detail afterwards, the
-first line of the @option{--help} output shows the arguments and if
-they are optional or not, similar to @ref{--usage}.
-
-In the @option{--help} output of all programs in Gnuastro, the
-options for each program are classified based on context. The first
-two contexts are always options to do with the input and output
-respectively. For example input image extensions or supplementary
-input files for the inputs. The last class of options is also fixed in
-all of Gnuastro, it shows operating mode options. Most of these
-options are already explained in @ref{Operating mode options}.
+If the command-line includes this option, the program will not be run.
+It will print a complete list of all available options along with a short explanation.
+The options are also grouped by their context.
+Within each context, the options are sorted alphabetically.
+Since the options are shown in detail afterwards, the first line of the @option{--help} output shows the arguments and whether they are optional or not, similar to @ref{--usage}.
+
+In the @option{--help} output of all programs in Gnuastro, the options for each program are classified based on context.
+The first two contexts are always options to do with the input and output respectively, for example input image extensions or supplementary input files.
+The last class of options is also fixed in all of Gnuastro: it shows the operating mode options.
+Most of these options are already explained in @ref{Operating mode options}.
 
 @cindex Long outputs
 @cindex Redirection of output
 @cindex Command-line, long outputs
-The help message will sometimes be longer than the vertical size of
-your terminal. If you are using a graphical user interface terminal
-emulator, you can scroll the terminal with your mouse, but we promised
-no mice distractions! So here are some suggestions:
+The help message will sometimes be longer than the vertical size of your terminal.
+If you are using a graphical user interface terminal emulator, you can scroll the terminal with your mouse, but we promised no mice distractions!
+So here are some suggestions:
 
 @itemize
 @item
 @cindex Scroll command-line
 @cindex Command-line scroll
 @cindex @key{Shift + PageUP} and @key{Shift + PageDown}
-@key{Shift + PageUP} to scroll up and @key{Shift + PageDown} to scroll
-down. For most help output this should be enough. The problem is that
-it is limited by the number of lines that your terminal keeps in
-memory and that you can't scroll by lines, only by whole screens.
+@key{Shift + PageUP} to scroll up and @key{Shift + PageDown} to scroll down.
+For most help output this should be enough.
+The problem is that it is limited by the number of lines that your terminal 
keeps in memory and that you can't scroll by lines, only by whole screens.
 
 @item
 @cindex Pipe
 @cindex @command{less}
-Pipe to @command{less}. A pipe is a form of shell re-direction. The
-@command{less} tool in Unix-like systems was made exactly for such
-outputs of any length. You can pipe (@command{|}) the output of any
-program that is longer than the screen to it and then you can scroll
-through (up and down) with its many tools. For example:
+Pipe to @command{less}.
+A pipe is a form of shell re-direction.
+The @command{less} tool in Unix-like systems was made exactly for such outputs 
of any length.
+You can pipe (@command{|}) the output of any program that is longer than the 
screen to it and then you can scroll through (up and down) with its many tools.
+For example:
 @example
 $ astnoisechisel --help | less
 @end example
 @noindent
-Once you have gone through the text, you can quit @command{less} by
-pressing the @key{q} key.
+Once you have gone through the text, you can quit @command{less} by pressing 
the @key{q} key.
 
 
 @item
 @cindex Save output to file
 @cindex Redirection of output
-Redirect to a file. This is a less convenient way, because you will
-then have to open the file in a text editor! You can do this with the
-shell redirection tool (@command{>}):
+Redirect to a file.
+This is a less convenient way, because you will then have to open the file in 
a text editor!
+You can do this with the shell redirection tool (@command{>}):
 @example
 $ astnoisechisel --help > filename.txt
 @end example
@@ -8779,10 +6568,9 @@ $ astnoisechisel --help > filename.txt
 @cindex GNU Grep
 @cindex Searching text
 @cindex Command-line searching text
-In case you have a special keyword you are looking for in the help, you
-don't have to go through the full list. GNU Grep is made for this job. For
-example if you only want the list of options whose @option{--help} output
-contains the word ``axis'' in Crop, you can run the following command:
+In case you have a special keyword you are looking for in the help, you don't 
have to go through the full list.
+GNU Grep is made for this job.
+For example if you only want the list of options whose @option{--help} output 
contains the word ``axis'' in Crop, you can run the following command:
 
 @example
 $ astcrop --help | grep axis
@@ -8792,42 +6580,28 @@ $ astcrop --help | grep axis
 @cindex Argp argument parser
 @cindex Customize @option{--help} output
 @cindex @option{--help} output customization
-If the output of this option does not fit nicely within the confines
-of your terminal, GNU does enable you to customize its output through
-the environment variable @code{ARGP_HELP_FMT}, you can set various
-parameters which specify the formatting of the help messages. For
-example if your terminals are wider than 70 spaces (say 100) and you
-feel there is too much empty space between the long options and the
-short explanation, you can change these formats by giving values to
-this environment variable before running the program with the
-@option{--help} output. You can define this environment variable in
-this manner:
+If the output of this option does not fit nicely within the confines of your terminal, GNU does enable you to customize its output through the environment variable @code{ARGP_HELP_FMT}: with it, you can set various parameters which specify the formatting of the help messages.
+For example, if your terminal is wider than 70 columns (say 100) and you feel there is too much empty space between the long options and the short explanations, you can change these formats by giving values to this environment variable before running the program with the @option{--help} option.
+You can define this environment variable in this manner:
 @example
 $ export ARGP_HELP_FMT=rmargin=100,opt-doc-col=20
 @end example
 @cindex @file{.bashrc}
-This will affect all GNU programs using GNU C library's @file{argp.h}
-facilities as long as the environment variable is in memory. You can
-see the full list of these formatting parameters in the ``Argp User
-Customization'' part of the GNU C library manual. If you are more
-comfortable to read the @option{--help} outputs of all GNU software in
-your customized format, you can add your customization (similar to
-the line above, without the @command{$} sign) to your @file{~/.bashrc}
-file. This is a standard option for all GNU software.
+This will affect all GNU programs using the GNU C library's @file{argp.h} facilities as long as the environment variable is in memory.
+You can see the full list of these formatting parameters in the ``Argp User Customization'' part of the GNU C library manual.
+If you are more comfortable reading the @option{--help} outputs of all GNU software in your customized format, you can add your customization (similar to the line above, without the @command{$} sign) to your @file{~/.bashrc} file.
+This is a standard option for all GNU software.
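+
+For example, the line to add to @file{~/.bashrc} (the same customization as above, without the @command{$} sign) would be:
+@example
+export ARGP_HELP_FMT=rmargin=100,opt-doc-col=20
+@end example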
 
 @node Man pages, Info, --help, Getting help
 @subsection Man pages
 @cindex Man pages
-Man pages were the Unix method of providing command-line documentation
-to a program. With GNU Info, see @ref{Info} the usage of this method
-of documentation is highly discouraged. This is because Info provides
-a much more easier to navigate and read environment.
+Man pages were the Unix method of providing command-line documentation for a program.
+With GNU Info (see @ref{Info}), the usage of this method of documentation is highly discouraged.
+This is because Info provides a much easier environment to navigate and read.
 
-However, some operating systems require a man page for packages that
-are installed and some people are still used to this method of command
-line help. So the programs in Gnuastro also have Man pages which are
-automatically generated from the outputs of @option{--version} and
-@option{--help} using the GNU help2man program. So if you run
+However, some operating systems require a man page for installed packages, and some people are still used to this method of command-line help.
+Therefore the programs in Gnuastro also have man pages, which are automatically generated from the outputs of @option{--version} and @option{--help} using the GNU help2man program.
+So if you run
 @example
 $ man programname
 @end example
@@ -8844,19 +6618,12 @@ standard manner.
 
 @cindex GNU Info
 @cindex Command-line, viewing full book
-Info is the standard documentation format for all GNU software. It is
-a very useful command-line document viewing format, fully equipped
-with links between the various pages and menus and search
-capabilities. As explained before, the best thing about it is that it
-is available for you the moment you need to refresh your memory on any
-command-line tool in the middle of your work without having to take
-your hands off the keyboard. This complete book is available in Info
-format and can be accessed from anywhere on the command-line.
+Info is the standard documentation format for all GNU software.
+It is a very useful command-line document viewing format, fully equipped with links between the various pages and menus, and with search capabilities.
+As explained before, the best thing about it is that it is available for you the moment you need to refresh your memory on any command-line tool in the middle of your work, without having to take your hands off the keyboard.
+This complete book is available in Info format and can be accessed from anywhere on the command-line.
 
-To open the Info format of any installed programs or library on your
-system which has an Info format book, you can simply run the command
-below (change @command{executablename} to the executable name of the
-program or library):
+To open the Info format of any installed programs or library on your system 
which has an Info format book, you can simply run the command below (change 
@command{executablename} to the executable name of the program or library):
 
 @example
 $ info executablename
@@ -8865,23 +6632,16 @@ $ info executablename
 @noindent
 @cindex Learning GNU Info
 @cindex GNU software documentation
-In case you are not already familiar with it, run @command{$ info
-info}. It does a fantastic job in explaining all its capabilities its
-self. It is very short and you will become sufficiently fluent in
-about half an hour. Since all GNU software documentation is also
-provided in Info, your whole GNU/Linux life will significantly
-improve.
+In case you are not already familiar with it, run @command{$ info info}.
+It does a fantastic job of explaining all its capabilities itself.
+It is very short and you will become sufficiently fluent in about half an hour.
+Since all GNU software documentation is also provided in Info, your whole GNU/Linux life will significantly improve.
 
 @cindex GNU Emacs
 @cindex GNU C library
-Once you've become an efficient navigator in Info, you can go to any
-part of this book or any other GNU software or library manual, no
-matter how long it is, in a matter of seconds. It also blends nicely
-with GNU Emacs (a text editor) and you can search manuals while you
-are writing your document or programs without taking your hands off
-the keyboard, this is most useful for libraries like the GNU C
-library. To be able to access all the Info manuals installed in your
-GNU/Linux within Emacs, type @key{Ctrl-H + i}.
+Once you've become an efficient navigator in Info, you can go to any part of this book, or any other GNU software or library manual, no matter how long it is, in a matter of seconds.
+It also blends nicely with GNU Emacs (a text editor): you can search manuals while you are writing your document or program without taking your hands off the keyboard; this is most useful for libraries like the GNU C library.
+To access all the Info manuals installed on your GNU/Linux system from within Emacs, type @key{Ctrl-H + i}.
 
 To see this whole book from the beginning in Info, you can run
 
@@ -8898,18 +6658,16 @@ $ info astprogramname
 @end example
 
 @noindent
-you will be taken to the section titled ``Invoking ProgramName'' which
-explains the inputs and outputs along with the command-line options for
-that program. Finally, if you run Info with the official program name, for
-example Crop or NoiseChisel:
+you will be taken to the section titled ``Invoking ProgramName'' which 
explains the inputs and outputs along with the command-line options for that 
program.
+Finally, if you run Info with the official program name, for example Crop or 
NoiseChisel:
 
 @example
 $ info ProgramName
 @end example
 
 @noindent
-you will be taken to the top section which introduces the
-program. Note that in all cases, Info is not case sensitive.
+you will be taken to the top section which introduces the program.
+Note that in all cases, Info is not case sensitive.
 
 
 
@@ -8918,23 +6676,17 @@ program. Note that in all cases, Info is not case 
sensitive.
 
 @cindex help-gnuastro mailing list
 @cindex Mailing list: help-gnuastro
-Gnuastro maintains the help-gnuastro mailing list for users to ask any
-questions related to Gnuastro. The experienced Gnuastro users and some
-of its developers are subscribed to this mailing list and your email
-will be sent to them immediately. However, when contacting this
-mailing list please have in mind that they are possibly very busy and
-might not be able to answer immediately.
+Gnuastro maintains the help-gnuastro mailing list for users to ask any questions related to Gnuastro.
+The experienced Gnuastro users and some of its developers are subscribed to this mailing list and your email will be sent to them immediately.
+However, when contacting this mailing list, please keep in mind that they are possibly very busy and might not be able to answer immediately.
 
 @cindex Mailing list archives
 @cindex @code{help-gnuastro@@gnu.org}
-To ask a question from this mailing list, send a mail to
-@code{help-gnuastro@@gnu.org}. Anyone can view the mailing list
-archives at @url{http://lists.gnu.org/archive/html/help-gnuastro/}. It
-is best that before sending a mail, you search the archives to see if
-anyone has asked a question similar to yours. If you want to make a
-suggestion or report a bug, please don't send a mail to this mailing
-list. We have other mailing lists and tools for those purposes, see
-@ref{Report a bug} or @ref{Suggest new feature}.
+To ask a question on this mailing list, send a mail to @code{help-gnuastro@@gnu.org}.
+Anyone can view the mailing list archives at @url{http://lists.gnu.org/archive/html/help-gnuastro/}.
+It is best that, before sending a mail, you search the archives to see if anyone has asked a question similar to yours.
+If you want to make a suggestion or report a bug, please don't send a mail to this mailing list.
+We have other mailing lists and tools for those purposes, see @ref{Report a bug} or @ref{Suggest new feature}.
 
 
 
@@ -8948,80 +6700,54 @@ list. We have other mailing lists and tools for those 
purposes, see
 @node Installed scripts, Multi-threaded operations, Getting help, Common 
program behavior
 @section Installed scripts
 
-Gnuastro's programs (introduced in previous chapters) are designed to be
-highly modular and thus mainly contain lower-level operations on the
-data. However, in many contexts, higher-level operations (for example a
-sequence of calls to multiple Gnuastro programs, or a special way of
-running a program and using the outputs) are also very similar between
-various projects.
+Gnuastro's programs (introduced in previous chapters) are designed to be 
highly modular and thus mainly contain lower-level operations on the data.
+However, in many contexts, higher-level operations (for example a sequence of 
calls to multiple Gnuastro programs, or a special way of running a program and 
using the outputs) are also very similar between various projects.
 
-To facilitate data analysis on these higher-level steps also, Gnuastro also
-installs some scripts on your system with the (@code{astscript-}) prefix
-(in contrast to the other programs that only have the @code{ast}
-prefix).
+To also facilitate data analysis in these higher-level steps, Gnuastro installs some scripts on your system with the @code{astscript-} prefix (in contrast to the other programs that only have the @code{ast} prefix).
 
 @cindex GNU Bash
-Like all of Gnuastro's source code, these scripts are also heavily
-commented. They are written in GNU Bash, which doesn't need
-compilation. Therefore, if you open the installed scripts in a text editor,
-you can actually read them@footnote{Gnuastro's installed programs (those
-only starting with @code{ast}) aren't human-readable. They are written in C
-and are thus compiled (optimized in binary CPU instructions that will be
-given directly to your CPU). Because they don't need an interpreter like
-Bash on every run, they are much faster and more independent than
-scripts. To read the source code of the programs, look into the
-@file{bin/progname} directory of Gnuastro's source (@ref{Downloading the
-source}). If you would like to read more about why C was chosen for the
-programs, please see @ref{Why C}.}. Bash is the same language that is
-mainly used when typing on the command-line. Because of these factors, Bash
-is much more widely known and used than C (the language of other Gnuastro
-programs). Gnuastro's installed scripts also do higher-level operations, so
-customizing these scripts for a special project will be more common than
-the programs. You can always inspect them (to customize, check, or educate
-your self) with this command (just replace @code{emacs} with your favorite
-text editor):
+Like all of Gnuastro's source code, these scripts are also heavily commented.
+They are written in GNU Bash, which doesn't need compilation.
+Therefore, if you open the installed scripts in a text editor, you can actually read them@footnote{Gnuastro's installed programs (those only starting with @code{ast}) aren't human-readable.
+They are written in C and are thus compiled (optimized into binary CPU instructions that will be given directly to your CPU).
+Because they don't need an interpreter like Bash on every run, they are much faster and more independent than scripts.
+To read the source code of the programs, look into the @file{bin/progname} directory of Gnuastro's source (@ref{Downloading the source}).
+If you would like to read more about why C was chosen for the programs, please see @ref{Why C}.}.
+Bash is the same language that is mainly used when typing on the command-line.
+Because of these factors, Bash is much more widely known and used than C (the language of the other Gnuastro programs).
+Gnuastro's installed scripts also do higher-level operations, so customizing these scripts for a special project will be more common than customizing the programs.
+You can always inspect them (to customize, check, or educate yourself) with this command (just replace @code{emacs} with your favorite text editor):
 
 @example
 $ emacs $(which astscript-NAME)
 @end example
 
-These scripts also accept options and are in many ways similar to the
-programs (see @ref{Common options}) with some minor differences:
+These scripts also accept options and are in many ways similar to the programs (see @ref{Common options}), with some minor differences (an example command follows the list below):
 
 @itemize
 @item
-Currently they don't accept configuration files themselves. However, the
-configuration files of the Gnuastro programs they call are indeed parsed
-and used by those programs.
+Currently they don't accept configuration files themselves.
+However, the configuration files of the Gnuastro programs they call are indeed 
parsed and used by those programs.
 
-As a result, they don't have the following options: @option{--checkconfig},
-@option{--config}, @option{--lastconfig}, @option{--onlyversion},
-@option{--printparams}, @option{--setdirconf} and @option{--setusrconf}.
+As a result, they don't have the following options: @option{--checkconfig}, 
@option{--config}, @option{--lastconfig}, @option{--onlyversion}, 
@option{--printparams}, @option{--setdirconf} and @option{--setusrconf}.
 
 @item
-They don't directly allocate any memory, so there is no
-@option{--minmapsize}.
+They don't directly allocate any memory, so there is no @option{--minmapsize}.
 
 @item
-They don't have an independent @option{--usage} option: when called with
-@option{--usage}, they just recommend running @option{--help}.
+They don't have an independent @option{--usage} option: when called with 
@option{--usage}, they just recommend running @option{--help}.
 
 @item
-The output of @option{--help} is not configurable like the programs (see
-@ref{--help}).
+The output of @option{--help} is not configurable like the programs (see 
@ref{--help}).
 
 @item
 @cindex GNU AWK
 @cindex GNU SED
-The scripts will commonly use your installed Bash and other basic
-command-line tools (for example AWK or SED). Different systems have
-different versions and implementations of these basic tools (for example
-GNU/Linux systems use GNU AWK and GNU SED which are far more advanced and
-up to date then the minimalist AWK and SED of most other
-systems). Therefore, unexpected errors in these tools might come up when
-you run these scripts. We will try our best to write these scripts in a
-portable way. However, if you do confront such strange errors, please
-submit a bug report so we fix it (see @ref{Report a bug}).
+The scripts will commonly use your installed Bash and other basic command-line tools (for example AWK or SED).
+Different systems have different versions and implementations of these basic tools (for example, GNU/Linux systems use GNU AWK and GNU SED, which are far more advanced and up to date than the minimalist AWK and SED of most other systems).
+Therefore, unexpected errors in these tools might come up when you run these scripts.
+We will try our best to write these scripts in a portable way.
+However, if you do confront such strange errors, please submit a bug report so that we can fix them (see @ref{Report a bug}).
 
 @end itemize
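+
+For example (reusing the @code{NAME} placeholder from above), you can see the options of any installed script with:
+@example
+$ astscript-NAME --help
+@end example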
 
@@ -9046,26 +6772,18 @@ submit a bug report so we fix it (see @ref{Report a 
bug}).
 @cindex Multi-threaded programs
 @cindex Using multiple CPU cores
 @cindex Simultaneous multithreading
-Some of the programs benefit significantly when you use all the threads
-your computer's CPU has to offer to your operating system. The number of
-threads available can be larger than the number of physical (hardware)
-cores in the CPU (also known as Simultaneous multithreading). For example,
-in Intel's CPUs (those that implement its Hyper-threading technology) the
-number of threads is usually double the number of physical cores in your
-CPU. On a GNU/Linux system, the number of threads available can be found
-with the command @command{$ nproc} command (part of GNU Coreutils).
+Some of the programs benefit significantly when you use all the threads your computer's CPU has to offer to your operating system.
+The number of threads available can be larger than the number of physical (hardware) cores in the CPU (this is known as simultaneous multithreading).
+For example, in Intel's CPUs (those that implement its Hyper-threading technology) the number of threads is usually double the number of physical cores in your CPU.
+On a GNU/Linux system, the number of available threads can be found with the @command{nproc} command (part of GNU Coreutils).
 
 @vindex --numthreads
 @cindex Number of threads available
 @cindex Available number of threads
 @cindex Internally stored option value
-Gnuastro's programs can find the number of threads available to your system
-internally at run-time (when you execute the program). However, if a value
-is given to the @option{--numthreads} option, the given number will be
-used, see @ref{Operating mode options} and @ref{Configuration files} for ways 
to
-use this option. Thus @option{--numthreads} is the only common option in
-Gnuastro's programs with a value that doesn't have to be specified anywhere
-on the command-line or in the configuration files.
+Gnuastro's programs can find the number of threads available to your system 
internally at run-time (when you execute the program).
+However, if a value is given to the @option{--numthreads} option, the given 
number will be used, see @ref{Operating mode options} and @ref{Configuration 
files} for ways to use this option.
+Thus @option{--numthreads} is the only common option in Gnuastro's programs 
with a value that doesn't have to be specified anywhere on the command-line or 
in the configuration files.
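+
+For example (the input name is hypothetical), to limit a program to four threads for one run:
+@example
+$ astnoisechisel image.fits --numthreads=4
+@end example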
 
 @menu
 * A note on threads::           Caution and suggestion on using threads.
@@ -9078,38 +6796,23 @@ on the command-line or in the configuration files.
 @cindex Using multiple threads
 @cindex Best use of CPU threads
 @cindex Efficient use of CPU threads
-Spinning off threads is not necessarily the most efficient way to run an
-application. Creating a new thread isn't a cheap operation for the
-operating system. It is most useful when the input data are fixed and you
-want the same operation to be done on parts of it. For example one input
-image to Crop and multiple crops from various parts of it. In this fashion,
-the image is loaded into memory once, all the crops are divided between the
-number of threads internally and each thread cuts out those parts which are
-assigned to it from the same image. On the other hand, if you have multiple
-images and you want to crop the same region(s) out of all of them, it is
-much more efficient to set @option{--numthreads=1} (so no threads spin off)
-and run Crop multiple times simultaneously, see @ref{How to run
-simultaneous operations}.
+Spinning off threads is not necessarily the most efficient way to run an 
application.
+Creating a new thread isn't a cheap operation for the operating system.
+It is most useful when the input data are fixed and you want the same 
operation to be done on parts of it.
+For example, one input image to Crop and multiple crops from various parts of 
it.
+In this fashion, the image is loaded into memory once, all the crops are 
divided between the number of threads internally and each thread cuts out those 
parts which are assigned to it from the same image.
+On the other hand, if you have multiple images and you want to crop the same 
region(s) out of all of them, it is much more efficient to set 
@option{--numthreads=1} (so no threads spin off) and run Crop multiple times 
simultaneously, see @ref{How to run simultaneous operations}.
 
 @cindex Wall-clock time
-You can check the boost in speed by first running a program on one of the
-data sets with the maximum number of threads and another time (with
-everything else the same) and only using one thread. You will notice that
-the wall-clock time (reported by most programs at their end) in the former
-is longer than the latter divided by number of physical CPU cores (not
-threads) available to your operating system. Asymptotically these two times
-can be equal (most of the time they aren't). So limiting the programs to
-use only one thread and running them independently on the number of
-available threads will be more efficient.
+You can check the boost in speed by first running a program on one of the data 
sets with the maximum number of threads and another time (with everything else 
the same) and only using one thread.
+You will notice that the wall-clock time (reported by most programs at their 
end) in the former is longer than the latter divided by the number of physical 
CPU cores (not threads) available to your operating system.
+Asymptotically these two times can be equal (most of the time they aren't).
+So limiting the programs to use only one thread and running them independently 
on the number of available threads will be more efficient.
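+
+As a sketch of such a comparison (@file{image.fits} is a hypothetical input), 
you can run the same command twice, once with all threads and once with one:
+
+@example
+$ astnoisechisel image.fits                  # All available threads.
+$ astnoisechisel image.fits --numthreads=1   # Only one thread.
+@end example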
 
 @cindex System Cache
 @cindex Cache, system
-Note that the operating system keeps a cache of recently processed
-data, so usually, the second time you process an identical data set
-(independent of the number of threads used), you will get faster
-results. In order to make an unbiased comparison, you have to first
-clean the system's cache with the following command between the two
-runs.
+Note that the operating system keeps a cache of recently processed data, so 
usually, the second time you process an identical data set (independent of the 
number of threads used), you will get faster results.
+In order to make an unbiased comparison, you have to first clean the system's 
cache with the following command between the two runs.
 
 @example
 $ sync; echo 3 | sudo tee /proc/sys/vm/drop_caches
@@ -9120,14 +6823,10 @@ $ sync; echo 3 | sudo tee /proc/sys/vm/drop_caches
 @strong{SUMMARY: Should I use multiple threads?} Depends:
 @itemize
 @item
-If you only have @strong{one} data set (image in most cases!), then
-yes, the more threads you use (with a maximum of the number of threads
-available to your OS) the faster you will get your results.
+If you only have @strong{one} data set (image in most cases!), then yes, the 
more threads you use (with a maximum of the number of threads available to your 
OS) the faster you will get your results.
 
 @item
-If you want to run the same operation on @strong{multiple} data sets, it is
-best to set the number of threads to 1 and use Make, or GNU Parallel, as
-explained in @ref{How to run simultaneous operations}.
+If you want to run the same operation on @strong{multiple} data sets, it is 
best to set the number of threads to 1 and use Make, or GNU Parallel, as 
explained in @ref{How to run simultaneous operations}.
 @end itemize
 @end cartouche
 
@@ -9138,37 +6837,25 @@ explained in @ref{How to run simultaneous operations}.
 @node How to run simultaneous operations,  , A note on threads, Multi-threaded 
operations
 @subsection How to run simultaneous operations
 
-There are two@footnote{A third way would be to open multiple terminal
-emulator windows in your GUI, type the commands separately on each and
-press @key{Enter} once on each terminal, but this is far too frustrating,
-tedious and prone to errors. It's therefore not a realistic solution when
-tens, hundreds or thousands of operations (your research targets,
-multiplied by the operations you do on each) are to be done.} approaches to
-simultaneously execute a program: using GNU Parallel or Make (GNU Make is
-the most common implementation). The first is very useful when you only
-want to do one job multiple times and want to get back to your work without
-actually keeping the command you ran. The second is usually for more
-important operations, with lots of dependencies between the different
-products (for example a full scientific research).
+There are two@footnote{A third way would be to open multiple terminal emulator 
windows in your GUI, type the commands separately on each and press @key{Enter} 
once on each terminal, but this is far too frustrating, tedious and prone to 
errors.
+It's therefore not a realistic solution when tens, hundreds or thousands of 
operations (your research targets, multiplied by the operations you do on each) 
are to be done.} approaches to simultaneously execute a program: using GNU 
Parallel or Make (GNU Make is the most common implementation).
+The first is very useful when you only want to do one job multiple times and 
want to get back to your work without actually keeping the command you ran.
+The second is usually for more important operations, with lots of dependencies 
between the different products (for example a full scientific research).
 
 @table @asis
 
 @item GNU Parallel
 @cindex GNU Parallel
-When you only want to run multiple instances of a command on different
-threads and get on with the rest of your work, the best method is to
-use GNU parallel. Surprisingly GNU Parallel is one of the few GNU
-packages that has no Info documentation but only a Man page, see
-@ref{Info}. So to see the documentation after installing it please run
+When you only want to run multiple instances of a command on different threads 
and get on with the rest of your work, the best method is to use GNU Parallel.
+Surprisingly, GNU Parallel is one of the few GNU packages that has no Info 
documentation, only a man page (see @ref{Info}).
+So to see the documentation after installing it, please run
 
 @example
 $ man parallel
 @end example
 @noindent
-As an example, let's assume we want to crop a region fixed on the
-pixels (500, 600) with the default width from all the FITS images in
-the @file{./data} directory ending with @file{sci.fits} to the current
-directory. To do this, you can run:
+As an example, let's assume we want to crop a region fixed on the pixels (500, 
600) with the default width from all the FITS images in the @file{./data} 
directory ending with @file{sci.fits} to the current directory.
+To do this, you can run:
 
 @example
 $ parallel astcrop --numthreads=1 --xc=500 --yc=600 ::: \
@@ -9176,62 +6863,38 @@ $ parallel astcrop --numthreads=1 --xc=500 --yc=600 ::: 
\
 @end example
 
 @noindent
-GNU Parallel can help in many more conditions, this is one of the
-simplest, see the man page for lots of other examples. For absolute
-beginners: the backslash (@command{\}) is only a line breaker to fit
-nicely in the page. If you type the whole command in one line, you
-should remove it.
+GNU Parallel can help in many more situations; this is one of the simplest, so 
see the man page for lots of other examples.
+For absolute beginners: the backslash (@command{\}) is only a line breaker to 
fit nicely in the page.
+If you type the whole command in one line, you should remove it.
 
 @item Make
 @cindex Make
-Make is a program for building ``targets'' (e.g., files) using ``recipes''
-(a set of operations) when their known ``prerequisites'' (other files) have
-been updated. It elegantly allows you to define dependency structures for
-building your final output and updating it efficiently when the inputs
-change. It is the most common infra-structure to build software
-today.
-
-Scientific research methodology is very similar to software development:
-you start by testing a hypothesis on a small sample of objects/targets with
-a simple set of steps. As you are able to get promising results, you
-improve the method and use it on a larger, more general, sample. In the
-process, you will confront many issues that have to be corrected (bugs in
-software development jargon). Make a wonderful tool to manage this style of
-development. It has been used to make reproducible papers, for example see
-@url{https://gitlab.com/makhlaghi/NoiseChisel-paper, the reproduction
-pipeline} of the paper introducing @ref{NoiseChisel} (one of Gnuastro's
-programs).
+Make is a program for building ``targets'' (e.g., files) using ``recipes'' (a 
set of operations) when their known ``prerequisites'' (other files) have been 
updated.
+It elegantly allows you to define dependency structures for building your 
final output and updating it efficiently when the inputs change.
+It is the most common infrastructure for building software today.
+
+Scientific research methodology is very similar to software development: you 
start by testing a hypothesis on a small sample of objects/targets with a 
simple set of steps.
+As you are able to get promising results, you improve the method and use it on 
a larger, more general, sample.
+In the process, you will confront many issues that have to be corrected (bugs 
in software development jargon).
+Make is a wonderful tool for managing this style of development.
+It has been used to make reproducible papers, for example see 
@url{https://gitlab.com/makhlaghi/NoiseChisel-paper, the reproduction pipeline} 
of the paper introducing @ref{NoiseChisel} (one of Gnuastro's programs).
 
 @cindex GNU Make
-GNU Make@footnote{@url{https://www.gnu.org/software/make/}} is the most
-common implementation which (similar to nearly all GNU programs, comes with
-a wonderful
-manual@footnote{@url{https://www.gnu.org/software/make/manual/}}). Make is
-very basic and simple, and thus the manual is short (the most important
-parts are in the first roughly 100 pages) and easy to read/understand.
+GNU Make@footnote{@url{https://www.gnu.org/software/make/}} is the most common 
implementation which (similar to nearly all GNU programs) comes with a 
wonderful manual@footnote{@url{https://www.gnu.org/software/make/manual/}}.
+Make is very basic and simple, and thus the manual is short (the most 
important parts are in the first roughly 100 pages) and easy to read/understand.
 
-Make comes with a @option{--jobs} (@option{-j}) option which allows you to
-specify the maximum number of jobs that can be done simultaneously. For
-example if you have 8 threads available to your operating system. You can
-run:
+Make comes with a @option{--jobs} (@option{-j}) option which allows you to 
specify the maximum number of jobs that can be done simultaneously.
+For example, if you have 8 threads available to your operating system, you 
can run:
 
 @example
 $ make -j8
 @end example
 
-With this command, Make will process your @file{Makefile} and create all
-the targets (can be thousands of FITS images for example) simultaneously on
-8 threads, while fully respecting their dependencies (only building a
-file/target when its prerequisites are successfully built). Make is thus
-strongly recommended for managing scientific research where robustness,
-archiving, reproducibility and speed@footnote{Besides its multi-threaded
-capabilities, Make will only re-build those targets that depend on a change
-you have made, not the whole work. For example, if you have set the
-prerequisites properly, you can easily test the changing of a parameter on
-your paper's results without having to re-do everything (which is much
-faster). This allows you to be much more productive in easily checking
-various ideas/assumptions of the different stages of your research and thus
-produce a more robust result for your exciting science.} are important.
+With this command, Make will process your @file{Makefile} and create all the 
targets (can be thousands of FITS images for example) simultaneously on 8 
threads, while fully respecting their dependencies (only building a file/target 
when its prerequisites are successfully built).
+Make is thus strongly recommended for managing scientific research where 
robustness, archiving, reproducibility and speed@footnote{Besides its 
multi-threaded capabilities, Make will only re-build those targets that depend 
on a change you have made, not the whole work.
+For example, if you have set the prerequisites properly, you can easily test 
the changing of a parameter on your paper's results without having to re-do 
everything (which is much faster).
+This allows you to be much more productive in easily checking various 
ideas/assumptions of the different stages of your research and thus produce a 
more robust result for your exciting science.} are important.
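+
+As a minimal sketch of a @file{Makefile} rule (all file names here are 
hypothetical, and recall that the recipe line must start with a @key{TAB}):
+
+@example
+## Re-build 'crop.fits' only when 'image.fits' has changed.
+crop.fits: image.fits
+        astcrop --xc=500 --yc=600 --output=crop.fits image.fits
+@end example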
 
 @end table
 
@@ -9244,60 +6907,48 @@ produce a more robust result for your exciting 
science.} are important.
 
 @cindex Bit
 @cindex Type
-At the lowest level, the computer stores everything in terms of @code{1} or
-@code{0}. For example, each program in Gnuastro, or each astronomical image
-you take with the telescope is actually a string of millions of these zeros
-and ones. The space required to keep a zero or one is the smallest unit of
-storage, and is known as a @emph{bit}. However, understanding and
-manipulating this string of bits is extremely hard for most
-people. Therefore, we define packages of these bits along with a standard
-on how to interpret the bits in each package as a @emph{type}.
+At the lowest level, the computer stores everything in terms of @code{1} or 
@code{0}.
+For example, each program in Gnuastro, or each astronomical image you take 
with the telescope is actually a string of millions of these zeros and ones.
+The space required to keep a zero or one is the smallest unit of storage, and 
is known as a @emph{bit}.
+However, understanding and manipulating this string of bits is extremely hard 
for most people.
+Therefore, different standards are defined to package the bits into separate 
@emph{type}s with a fixed interpretation of the bits in each package.
 
 @cindex Byte
 @cindex Signed integer
 @cindex Unsigned integer
 @cindex Integer, Signed
-The most basic standard for reading the bits is integer numbers
-(@mymath{..., -2, -1, 0, 1, 2, ...}, more bits will give larger
-limits). The common integer types are 8, 16, 32, and 64 bits wide. For each
-width, there are two standards for reading the bits: signed and unsigned
-integers. In the former, negative numbers are allowed and in the latter,
-they aren't. The @code{unsigned} types thus have larger positive limits
-(one extra bit), but no negative value. When the context of your work
-doesn't involve negative numbers (for example counting, where negative is
-not defined), it is best to use the @code{unsigned} types. For full
-numerical range of all integer types, see below.
-
-Another standard of converting a given number of bits to numbers is the
-floating point standard, this standard can approximately store any real
-number with a given precision. There are two common floating point types:
-32-bit and 64-bit, for single and double precision floating point numbers
-respectively. The former is sufficient for data with less than 8
-significant decimal digits (most astronomical data), while the latter is
-good for less than 16 significant decimal digits. The representation of
-real numbers as bits is much more complex than integers. If you are
-interested, you can start with the
-@url{https://en.wikipedia.org/wiki/Floating_point, Wikipedia article}.
-
-With the conversion operators in Gnuastro's Arithmetic, you can change the
-types of data to each other, which is necessary in some contexts. For
-example the program/library, that you intend to feed the data into, only
-accepts floating point values, but you have an integer image. Another
-situation that conversion can be helpful is when you know that your data
-only has values that fit within @code{int8} or @code{uint16}. However it is
-currently formatted in the @code{float64} type. Operations involving
-floating point or larger integer types are significantly slower than
-integer or smaller-width types respectively. In the latter case, it also
-requires much more (by 8 or 4 times in the example above) storage space. So
-when you confront such situations and want to store/archive/transfter the
-data, it is best convert them to the most efficient type.
-
-The short and long names for the recognized numeric data types in Gnuastro
-are listed below. Both short and long names can be used when you want to
-specify a type. For example, as a value to the common option
-@option{--type} (see @ref{Input output options}), or in the information
-comment lines of @ref{Gnuastro text table format}. The ranges listed below
-are inclusive.
+To store numbers, the most basic standard/type is for integers (@mymath{..., 
-2, -1, 0, 1, 2, ...}).
+The common integer types are 8, 16, 32, and 64 bits wide (more bits will give 
larger limits).
+Each bit corresponds to a power of 2 and they are summed to create the final 
number.
+In the integer types, for each width there are two standards for reading the 
bits: signed and unsigned.
+In the `signed' convention, one bit is reserved for the sign (stating that the 
integer is positive or negative).
+The `unsigned' integers use that bit in the actual number and thus contain 
only positive numbers (starting from zero).
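+
+For example, interpreting a hypothetical 8-bit pattern as an unsigned integer, 
each 1-bit contributes its corresponding power of 2:
+
+@example
+00000101   =   2^2 + 2^0   =   5
+@end example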
+
+Therefore, with the same number of bits, signed and unsigned integers can 
represent the same number of distinct values, but the positive limit of the 
@code{unsigned} types is double that of their @code{signed} counterparts with 
the same width (at the expense of not having negative numbers).
+When the context of your work doesn't involve negative numbers (for example 
counting, where negative is not defined), it is best to use the @code{unsigned} 
types.
+For the full numerical range of all integer types, see below.
+
+Another standard for converting a given number of bits to numbers is the 
floating point standard, which can @emph{approximately} store any real number 
with a given precision.
+There are two common floating point types: 32-bit and 64-bit, for single and 
double precision floating point numbers respectively.
+The former is sufficient for data with less than 8 significant decimal digits 
(most astronomical data), while the latter is good for less than 16 significant 
decimal digits.
+The representation of real numbers as bits is much more complex than integers.
+If you are interested in learning more about it, you can start with the 
@url{https://en.wikipedia.org/wiki/Floating_point, Wikipedia article}.
+
+Practically, you can use Gnuastro's Arithmetic program to convert/change the 
type of an image/datacube (see @ref{Arithmetic}), or Gnuastro Table program to 
convert a table column's data type (see @ref{Column arithmetic}).
+Conversion of a dataset's type is necessary in some contexts.
+For example, the program/library that you intend to feed the data into only 
accepts floating point values, but you have an integer image/column.
+Another situation where conversion can be helpful is when you know that your 
data only has values that fit within @code{int8} or @code{uint16}, but is 
currently formatted in the @code{float64} type.
+
+The important thing to consider is that operations involving wider, floating 
point, or signed types can be significantly slower than those involving 
smaller-width, integer, or unsigned types respectively.
+Note that besides speed, a wider type also requires much more storage space 
(by 4 or 8 times).
+Therefore, when you confront such situations and want to 
store/archive/transfer the data, it is best to use the most efficient type.
+For example if your dataset (image or table column) only has positive integers 
less than 65535, store it as an unsigned 16-bit integer for faster processing, 
faster transfer, and less storage space.
+
+The short and long names for the recognized numeric data types in Gnuastro are 
listed below.
+Both short and long names can be used when you want to specify a type.
+For example, as a value to the common option @option{--type} (see @ref{Input 
output options}), or in the information comment lines of @ref{Gnuastro text 
table format}.
+The ranges listed below are inclusive.
 
 @table @code
 @item u8
@@ -9342,33 +6993,25 @@ are inclusive.
 
 @item f32
 @itemx float32
-32-bit (single-precision) floating point types. The maximum (minimum is its
-negative) possible value is
-@mymath{3.402823\times10^{38}}. Single-precision floating points can
-accurately represent a floating point number up to @mymath{\sim7.2}
-significant decimals. Given the heavy noise in astronomical data, this is
-usually more than sufficient for storing results.
+32-bit (single-precision) floating point types.
+The maximum (minimum is its negative) possible value is 
@mymath{3.402823\times10^{38}}.
+Single-precision floating points can accurately represent a floating point 
number up to @mymath{\sim7.2} significant decimals.
+Given the heavy noise in astronomical data, this is usually more than 
sufficient for storing results.
 
 @item f64
 @itemx float64
-64-bit (double-precision) floating point types. The maximum (minimum is its
-negative) possible value is @mymath{\sim10^{308}}. Double-precision
-floating points can accurately represent a floating point number
-@mymath{\sim15.9} significant decimals. This is usually good for processing
-(mixing) the data internally, for example a sum of single precision data
-(and later storing the result as @code{float32}).
+64-bit (double-precision) floating point types.
+The maximum (minimum is its negative) possible value is @mymath{\sim10^{308}}.
+Double-precision floating points can accurately represent a floating point 
number up to @mymath{\sim15.9} significant decimals.
+This is usually good for processing (mixing) the data internally, for example 
a sum of single precision data (and later storing the result as @code{float32}).
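+
+As a sketch of that workflow (with hypothetical single-precision inputs 
@file{a.fits} and @file{b.fits}): convert both to double precision, add them, 
then store the sum back in single precision:
+
+@example
+$ astarithmetic a.fits float64 b.fits float64 + \
+                float32 --output=sum.fits
+@end example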
 @end table
 
 @cartouche
 @noindent
-@strong{Some file formats don't recognize all types.} Some file formats
-don't recognize all the types, for example the FITS standard (see
-@ref{Fits}) does not define @code{uint64} in binary tables or images. When
-a type is not acceptable for output into a given file format, the
-respective Gnuastro program or library will let you know and abort. On the
-command-line, you can use the @ref{Arithmetic} program to convert the
-numerical type of a dataset, in the libraries, you can call
-@code{gal_data_copy_to_new_type}.
+@strong{Some file formats don't recognize all types.} For example, the FITS 
standard (see @ref{Fits}) does not define @code{uint64} in binary tables or 
images.
+When a type is not acceptable for output into a given file format, the 
respective Gnuastro program or library will let you know and abort.
+On the command-line, you can convert the numerical type of an image, or table 
column into another type with @ref{Arithmetic} or @ref{Table} respectively.
+If you are writing your own program, you can use the 
@code{gal_data_copy_to_new_type()} function in Gnuastro's library, see 
@ref{Copying datasets}.
 @end cartouche
 
 
@@ -9376,51 +7019,30 @@ numerical type of a dataset, in the libraries, you can 
call
 @node Tables, Tessellation, Numeric data types, Common program behavior
 @section Tables
 
-``A table is a collection of related data held in a structured format
-within a database. It consists of columns, and rows.'' (from
-Wikipedia). Each column in the table contains the values of one property
-and each row is a collection of properties (columns) for one target
-object. For example, let's assume you have just ran MakeCatalog (see
-@ref{MakeCatalog}) on an image to measure some properties for the labeled
-regions (which might be detected galaxies for example) in the image. For
-each labeled region (detected galaxy), there will be a @emph{row} which
-groups its measured properties as @emph{columns}, one column for each
-property. One such property can be the object's magnitude, which is the sum
-of pixels with that label, or its center can be defined as the
-light-weighted average value of those pixels. Many such properties can be
-derived from the raw pixel values and their position, see @ref{Invoking
-astmkcatalog} for a long list.
-
-As a summary, for each labeled region (or, galaxy) we have one @emph{row}
-and for each measured property we have one @emph{column}. This high-level
-structure is usually the first step for higher-level analysis, for example
-finding the stellar mass or photometric redshift from magnitudes in
-multiple colors. Thus, tables are not just outputs of programs, in fact it
-is much more common for tables to be inputs of programs. For example, to
-make a mock galaxy image, you need to feed in the properties of each galaxy
-into @ref{MakeProfiles} for it do the inverse of the process above and make
-a simulated image from a catalog, see @ref{Sufi simulates a detection}. In
-other cases, you can feed a table into @ref{Crop} and it will crop out
-regions centered on the positions within the table, see @ref{Finding
-reddest clumps and visual inspection}. So to end this relatively long
-introduction, tables play a very important role in astronomy, or generally
-all branches of data analysis.
-
-In @ref{Recognized table formats} the currently recognized table formats in
-Gnuastro are discussed. You can use any of these tables as input or ask for
-them to be built as output. The most common type of table format is a
-simple plain text file with each row on one line and columns separated by
-white space characters, this format is easy to read/write by eye/hand. To
-give it the full functionality of more specific table types like the FITS
-tables, Gnuastro has a special convention which you can use to give each
-column a name, type, unit, and comments, while still being readable by
-other plain text table readers. This convention is described in
-@ref{Gnuastro text table format}.
-
-When tables are input to a program, the program reading it needs to know
-which column(s) it should use for its desired purposes. Gnuastro's programs
-all follow a similar convention, on the way you can select columns in a
-table. They are thoroughly discussed in @ref{Selecting table columns}.
+``A table is a collection of related data held in a structured format within a 
database.
+It consists of columns, and rows.'' (from Wikipedia).
+Each column in the table contains the values of one property and each row is a 
collection of properties (columns) for one target object.
+For example, let's assume you have just run MakeCatalog (see 
@ref{MakeCatalog}) on an image to measure some properties for the labeled 
regions (which might be detected galaxies for example) in the image.
+For each labeled region (detected galaxy), there will be a @emph{row} which 
groups its measured properties as @emph{columns}, one column for each property.
+One such property can be the object's magnitude, which is the sum of pixels 
with that label, or its center can be defined as the light-weighted average 
value of those pixels.
+Many such properties can be derived from the raw pixel values and their 
position, see @ref{Invoking astmkcatalog} for a long list.
+
+As a summary, for each labeled region (or, galaxy) we have one @emph{row} and 
for each measured property we have one @emph{column}.
+This high-level structure is usually the first step for higher-level analysis, 
for example finding the stellar mass or photometric redshift from magnitudes in 
multiple colors.
+Thus, tables are not just outputs of programs; in fact, it is much more common 
for tables to be inputs of programs.
+For example, to make a mock galaxy image, you need to feed the properties of 
each galaxy into @ref{MakeProfiles} for it to do the inverse of the process 
above and make a simulated image from a catalog, see @ref{Sufi simulates a 
detection}.
+In other cases, you can feed a table into @ref{Crop} and it will crop out 
regions centered on the positions within the table, see @ref{Finding reddest 
clumps and visual inspection}.
+So to end this relatively long introduction: tables play a very important role 
in astronomy and, more generally, in all branches of data analysis.
+
+In @ref{Recognized table formats} the currently recognized table formats in 
Gnuastro are discussed.
+You can use any of these tables as input or ask for them to be built as output.
+The most common type of table format is a simple plain text file with each row 
on one line and columns separated by white space characters; this format is 
easy to read/write by eye/hand.
+To give it the full functionality of more specific table types like the FITS 
tables, Gnuastro has a special convention which you can use to give each column 
a name, type, unit, and comments, while still being readable by other plain 
text table readers.
+This convention is described in @ref{Gnuastro text table format}.
+
+When tables are input to a program, the program reading it needs to know which 
column(s) it should use for its desired purposes.
+Gnuastro's programs all follow a similar convention for selecting columns in a 
table.
+It is thoroughly discussed in @ref{Selecting table columns}.
 
 
 @menu
@@ -9432,92 +7054,57 @@ table. They are thoroughly discussed in @ref{Selecting 
table columns}.
 @node Recognized table formats, Gnuastro text table format, Tables, Tables
 @subsection Recognized table formats
 
-The list of table formats that Gnuastro can currently read from and write
-to are described below. Each has their own advantage and disadvantages, so a
-short review of the format is also provided to help you make the best
-choice based on how you want to define your input tables or later use your
-output tables.
+The list of table formats that Gnuastro can currently read from and write to 
is described below.
+Each has its own advantages and disadvantages, so a short review of each 
format is also provided to help you make the best choice based on how you want 
to define your input tables or later use your output tables.
 
 @table @asis
 
 @item Plain text table
-This is the most basic and simplest way to create, view, or edit the table
-by hand on a text editor. The other formats described below are less
-eye-friendly and have a more formal structure (for easier computer
-readability). It is fully described in @ref{Gnuastro text table format}.
+This is the most basic and simplest way to create, view, or edit a table by 
hand in a text editor.
+The other formats described below are less eye-friendly and have a more formal 
structure (for easier computer readability).
+It is fully described in @ref{Gnuastro text table format}.
 
 @cindex FITS Tables
 @cindex Tables FITS
 @cindex ASCII table, FITS
 @item FITS ASCII tables
-The FITS ASCII table extension is fully in ASCII encoding and thus easily
-readable on any text editor (assuming it is the only extension in the FITS
-file). If the FITS file also contains binary extensions (for example an
-image or binary table extensions), then there will be many hard to print
-characters. The FITS ASCII format doesn't have new line characters to
-separate rows. In the FITS ASCII table standard, each row is defined as a
-fixed number of characters (value to the @code{NAXIS1} keyword), so to
-visually inspect it properly, you would have to adjust your text editor's
-width to this value. All columns start at given character positions and
-have a fixed width (number of characters).
-
-Numbers in a FITS ASCII table are printed into ASCII format, they are not
-in binary (that the CPU uses). Hence, they can take a larger space in
-memory, loose their precision, and take longer to read into memory. If you
-are dealing with integer type columns (see @ref{Numeric data types}),
-another issue with FITS ASCII tables is that the type information for the
-column will be lost (there is only one integer type in FITS ASCII
-tables). One problem with the binary format on the other hand is that it
-isn't portable (different CPUs/compilers) have different standards for
-translating the zeros and ones. But since ASCII characters are defined on a
-byte and are well recognized, they are better for portability on those
-various systems. Gnuastro's plain text table format described below is much
-more portable and easier to read/write/interpret by humans manually.
-
-Generally, as the name implies, this format is useful for when your table
-mainly contains ASCII columns (for example file names, or
-descriptions). They can be useful when you need to include columns with
-structured ASCII information along with other extensions in one FITS
-file. In such cases, you can also consider header keywords (see
-@ref{Fits}).
+The FITS ASCII table extension is fully in ASCII encoding and thus easily 
readable on any text editor (assuming it is the only extension in the FITS 
file).
+If the FITS file also contains binary extensions (for example an image or 
binary table extension), then there will be many hard-to-print characters.
+The FITS ASCII format doesn't have new line characters to separate rows.
+In the FITS ASCII table standard, each row is defined as a fixed number of 
characters (value to the @code{NAXIS1} keyword), so to visually inspect it 
properly, you would have to adjust your text editor's width to this value.
+All columns start at given character positions and have a fixed width (number 
of characters).
+
+Numbers in a FITS ASCII table are printed in ASCII format; they are not in 
binary (which the CPU uses).
+Hence, they can take more space in memory, lose their precision, and take 
longer to read into memory.
+If you are dealing with integer type columns (see @ref{Numeric data types}), 
another issue with FITS ASCII tables is that the type information for the 
column will be lost (there is only one integer type in FITS ASCII tables).
+One problem with the binary format, on the other hand, is that it isn't 
portable: different CPUs/compilers have different standards for translating 
the zeros and ones.
+But since ASCII characters are defined on a byte and are well recognized, they 
are better for portability on those various systems.
+Gnuastro's plain text table format described below is much more portable and 
easier to read/write/interpret by humans manually.
+
+Generally, as the name implies, this format is useful when your table mainly 
contains ASCII columns (for example file names or descriptions).
+They can be useful when you need to include columns with structured ASCII 
information along with other extensions in one FITS file.
+In such cases, you can also consider header keywords (see @ref{Fits}).
 
 @cindex Binary table, FITS
 @item FITS binary tables
-The FITS binary table is the FITS standard's solution to the issues
-discussed with keeping numbers in ASCII format as described under the FITS
-ASCII table title above. Only columns defined as a string type (a string of
-ASCII characters) are readable in a text editor. The portability problem
-with binary formats discussed above is mostly solved thanks to the
-portability of CFITSIO (see @ref{CFITSIO}) and the very long history of the
-FITS format which has been widely used since the 1970s.
-
-In the case of most numbers, storing them in binary format is more memory
-efficient than ASCII format. For example, to store @code{-25.72034} in
-ASCII format, you need 9 bytes/characters. But if you keep this same number
-(to the approximate precision possible) as a 4-byte (32-bit) floating point
-number, you can keep/transmit it with less than half the amount of
-memory. When catalogs contain thousands/millions of rows in tens/hundreds
-of columns, this can lead to significant improvements in memory/band-width
-usage. Moreover, since the CPU does its operations in the binary formats,
-reading the table in and writing it out is also much faster than an ASCII
-table.
+The FITS binary table is the FITS standard's solution to the issues discussed 
with keeping numbers in ASCII format as described under the FITS ASCII table 
title above.
+Only columns defined as a string type (a string of ASCII characters) are 
readable in a text editor.
+The portability problem with binary formats discussed above is mostly solved 
thanks to the portability of CFITSIO (see @ref{CFITSIO}) and the very long 
history of the FITS format which has been widely used since the 1970s.
+
+In the case of most numbers, storing them in binary format is more memory 
efficient than ASCII format.
+For example, to store @code{-25.72034} in ASCII format, you need 9 
bytes/characters.
+But if you keep this same number (to the approximate precision possible) as a 
4-byte (32-bit) floating point number, you can keep/transmit it with less than 
half the amount of memory.
+When catalogs contain thousands/millions of rows in tens/hundreds of columns, 
this can lead to significant improvements in memory/band-width usage.
+Moreover, since the CPU does its operations in the binary formats, reading the 
table in and writing it out is also much faster than an ASCII table.
+
+When you are dealing with integer numbers, the savings can be even better: for 
example, if you know all of the values in a column are positive and less than 
@code{255}, you can use the @code{unsigned char} type, which only takes one 
byte!
+If they are between @code{-128} and @code{127}, then you can use the (signed) 
@code{char} type.
+So if you are thoughtful about the limits of your integer columns, you can 
greatly reduce the size of your file and also the speed at which it is 
read/written.
+This can be very useful when sharing your results with collaborators or 
publishing them.
+To decrease the file size even more, you can give your output a name ending in 
@file{.fits.gz} so it is also compressed after creation.
+Just note that compression/decompression is CPU intensive and can slow down 
the writing/reading of the file.
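+
+For example, with a hypothetical @file{catalog.fits}, asking for a 
@file{.fits.gz} output compresses the table on creation:
+
+@example
+$ asttable catalog.fits --output=catalog.fits.gz
+@end example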
 
-When you are dealing with integer numbers, the compression ratio can be
-even better, for example if you know all of the values in a column are
-positive and less than @code{255}, you can use the @code{unsigned char}
-type which only takes one byte! If they are between @code{-128} and
-@code{127}, then you can use the (signed) @code{char} type. So if you are
-thoughtful about the limits of your integer columns, you can greatly reduce
-the size of your file and also the speed at which it is read/written. This
-can be very useful when sharing your results with collaborators or
-publishing them. To decrease the file size even more you can name your
-output as ending in @file{.fits.gz} so it is also compressed after
-creation. Just note that compression/decompressing is CPU intensive and can
-slow down the writing/reading of the file.
-
-Fortunately the FITS Binary table format also accepts ASCII strings as
-column types (along with the various numerical types). So your dataset can
-also contain non-numerical columns.
+Fortunately, the FITS binary table format also accepts ASCII strings as column 
types (along with the various numerical types).
+So your dataset can also contain non-numerical columns.
 
 @end table
 
@@ -9528,70 +7115,45 @@ also contain non-numerical columns.
 @node Gnuastro text table format, Selecting table columns, Recognized table 
formats, Tables
 @subsection Gnuastro text table format
 
-Plain text files are the most generic, portable, and easiest way to
-(manually) create, (visually) inspect, or (manually) edit a table. In this
-format, the ending of a row is defined by the new-line character (a line on
-a text editor). So when you view it on a text editor, every row will occupy
-one line. The delimiters (or characters separating the columns) are white
-space characters (space, horizontal tab, vertical tab) and a comma
-(@key{,}). The only further requirement is that all rows/lines must have
-the same number of columns.
-
-The columns don't have to be exactly under each other and the rows can be
-arbitrarily long with different lengths. For example the following contents
-in a file would be interpreted as a table with 4 columns and 2 rows, with
-each element interpreted as a @code{double} type (see @ref{Numeric data
-types}).
+Plain text files are the most generic, portable, and easiest way to (manually) 
create, (visually) inspect, or (manually) edit a table.
+In this format, the ending of a row is defined by the new-line character (a 
line on a text editor).
+So when you view it on a text editor, every row will occupy one line.
+The delimiters (or characters separating the columns) are white space 
characters (space, horizontal tab, vertical tab) and a comma (@key{,}).
+The only further requirement is that all rows/lines must have the same number 
of columns.
+
+The columns don't have to be exactly under each other and the rows can be 
arbitrarily long with different lengths.
+For example the following contents in a file would be interpreted as a table 
with 4 columns and 2 rows, with each element interpreted as a @code{double} 
type (see @ref{Numeric data types}).
 
 @example
 1     2.234948   128   39.8923e8
 2 , 4.454        792     72.98348e7
 @end example
 
-However, the example above has no other information about the columns (it
-is just raw data, with no meta-data). To use this table, you have to
-remember what the numbers in each column represent. Also, when you want to
-select columns, you have to count their position within the table. This can
-become frustrating and prone to bad errors (getting the columns wrong)
-especially as the number of columns increase. It is also bad for sending to
-a colleague, because they will find it hard to remember/use the columns
-properly.
-
-To solve these problems in Gnuastro's programs/libraries you aren't limited
-to using the column's number, see @ref{Selecting table columns}. If the
-columns have names, units, or comments you can also select your columns
-based on searches/matches in these fields, for example see @ref{Table}.
-Also, in this manner, you can't guide the program reading the table on how
-to read the numbers. As an example, the first and third columns above can
-be read as integer types: the first column might be an ID and the third can
-be the number of pixels an object occupies in an image. So there is no need
-to read these to columns as a @code{double} type (which takes more memory,
-and is slower).
-
-In the bare-minimum example above, you also can't use strings of
-characters, for example the names of filters, or some other identifier that
-includes non-numerical characters. In the absence of any information, only
-numbers can be read robustly. Assuming we read columns with non-numerical
-characters as string, there would still be the problem that the strings
-might contain space (or any delimiter) character for some rows. So, each
-`word' in the string will be interpreted as a column and the program will
-abort with an error that the rows don't have the same number of columns.
-
-To correct for these limitations, Gnuastro defines the following convention
-for storing the table meta-data along with the raw data in one plain text
-file. The format is primarily designed for ease of reading/writing by
-eye/fingers, but is also structured enough to be read by a program.
-
-When the first non-white character in a line is @key{#}, or there are no
-non-white characters in it, then the line will not be considered as a row
-of data in the table (this is a pretty standard convention in many
-programs, and higher level languages). In the former case, the line is
-interpreted as a @emph{comment}. If the comment line starts with `@code{#
-Column N:}', then it is assumed to contain information about column
-@code{N} (a number, counting from 1). Comment lines that don't start with
-this pattern are ignored and you can use them to include any further
-information you want to store with the table in the text file. A column
-information comment is assumed to have the following format:
+However, the example above has no other information about the columns (it is 
just raw data, with no meta-data).
+To use this table, you have to remember what the numbers in each column 
represent.
+Also, when you want to select columns, you have to count their position within 
the table.
+This can become frustrating and prone to errors (getting the columns wrong), 
especially as the number of columns increases.
+It is also bad for sending to a colleague, because they will find it hard to 
remember/use the columns properly.
+
+To solve these problems, in Gnuastro's programs/libraries you aren't limited 
to using the column's number; see @ref{Selecting table columns}.
+If the columns have names, units, or comments you can also select your columns 
based on searches/matches in these fields, for example see @ref{Table}.
+Also, without such meta-data, you can't guide the program reading the table on 
how to read the numbers.
+As an example, the first and third columns above can be read as integer types: 
the first column might be an ID and the third can be the number of pixels an 
object occupies in an image.
+So there is no need to read these two columns as a @code{double} type (which 
takes more memory and is slower).
+
+In the bare-minimum example above, you also can't use strings of characters, 
for example the names of filters, or some other identifier that includes 
non-numerical characters.
+In the absence of any information, only numbers can be read robustly.
+Even if we read columns with non-numerical characters as strings, there would 
still be the problem that the strings might contain a space (or any other 
delimiter) character in some rows.
+So, each `word' in the string will be interpreted as a column and the program 
will abort with an error that the rows don't have the same number of columns.
+
+To address these limitations, Gnuastro defines the following convention for 
storing the table meta-data along with the raw data in one plain text file.
+The format is primarily designed for ease of reading/writing by eye/fingers, 
but is also structured enough to be read by a program.
+
+When the first non-white character in a line is @key{#}, or there are no 
non-white characters in it, then the line will not be considered as a row of 
data in the table (this is a pretty standard convention in many programs, and 
higher level languages).
+In the former case, the line is interpreted as a @emph{comment}.
+If the comment line starts with `@code{# Column N:}', then it is assumed to 
contain information about column @code{N} (a number, counting from 1).
+Comment lines that don't start with this pattern are ignored and you can use 
them to include any further information you want to store with the table in the 
text file.
+A column information comment is assumed to have the following format:
 
 @example
 # Column N: NAME [UNIT, TYPE, BLANK] COMMENT
@@ -9599,50 +7161,32 @@ information comment is assumed to have the following 
format:
 
 @cindex NaN
 @noindent
-Any sequence of characters between `@key{:}' and `@key{[}' will be
-interpreted as the column name (so it can contain anything except the
-`@key{[}' character). Anything between the `@key{]}' and the end of the
-line is defined as a comment. Within the brackets, anything before the
-first `@key{,}' is the units (physical units, for example km/s, or erg/s),
-anything before the second `@key{,}' is the short type identifier (see
-below, and @ref{Numeric data types}). Finally (still within the brackets),
-any non-white characters after the second `@key{,}' are interpreted as the
-blank value for that column (see @ref{Blank pixels}). Note that blank
-values will be stored in the same type as the column, not as a
-string@footnote{For floating point types, the @code{nan}, or @code{inf}
-strings (both not case-sensitive) refer to IEEE NaN (not a number) and
-infinity values respectively and will be stored as a floating point, so
-they are acceptable.}.
-
-When a formatting problem occurs (for example you have specified the wrong
-type code, see below), or the the column was already given meta-data in a
-previous comment, or the column number is larger than the actual number of
-columns in the table (the non-commented or empty lines), then the comment
-information line will be ignored.
-
-When a comment information line can be used, the leading and trailing white
-space characters will be stripped from all of the elements. For example in
-this line:
+Any sequence of characters between `@key{:}' and `@key{[}' will be interpreted 
as the column name (so it can contain anything except the `@key{[}' character).
+Anything between the `@key{]}' and the end of the line is defined as a comment.
+Within the brackets, anything before the first `@key{,}' is the units 
(physical units, for example km/s, or erg/s), anything before the second 
`@key{,}' is the short type identifier (see below, and @ref{Numeric data 
types}).
+Finally (still within the brackets), any non-white characters after the second 
`@key{,}' are interpreted as the blank value for that column (see @ref{Blank 
pixels}).
+Note that blank values will be stored in the same type as the column, not as a 
string@footnote{For floating point types, the @code{nan}, or @code{inf} strings 
(both not case-sensitive) refer to IEEE NaN (not a number) and infinity values 
respectively and will be stored as a floating point, so they are acceptable.}.
+
+When a formatting problem occurs (for example you have specified the wrong 
type code, see below), or the column was already given meta-data in a previous 
comment, or the column number is larger than the actual number of columns in 
the table (the non-commented or empty lines), then the comment information line 
will be ignored.
+
+When a comment information line can be used, the leading and trailing white 
space characters will be stripped from all of the elements.
+For example in this line:
 
 @example
 # Column 5:  column name   [km/s,    f32,-99] Redshift as speed
 @end example
 
-The @code{NAME} field will be `@code{column name}' and the @code{TYPE}
-field will be `@code{f32}'. Note how all the white space characters before
-and after strings are not used, but those in the middle remained. Also,
-white space characters aren't mandatory. Hence, in the example above, the
-@code{BLANK} field will be given the value of `@code{-99}'.
+The @code{NAME} field will be `@code{column name}' and the @code{TYPE} field 
will be `@code{f32}'.
+Note how the white space characters before and after the strings are not used, 
but those in the middle remain.
+Also, white space characters aren't mandatory.
+Hence, in the example above, the @code{BLANK} field will be given the value of 
`@code{-99}'.
 
-Except for the column number (@code{N}), the rest of the fields are
-optional. Also, the column information comments don't have to be in
-order. In other words, the information for column @mymath{N+m}
-(@mymath{m>0}) can be given in a line before column @mymath{N}. Also, you
-don't have to specify information for all columns. Those columns that don't
-have this information will be interpreted with the default settings (like
-the case above: values are double precision floating point, and the column
-has no name, unit, or comment). So these lines are all acceptable for any
-table (the first one, with nothing but the column number is redundant):
+Except for the column number (@code{N}), the rest of the fields are optional.
+Also, the column information comments don't have to be in order.
+In other words, the information for column @mymath{N+m} (@mymath{m>0}) can be 
given in a line before column @mymath{N}.
+Also, you don't have to specify information for all columns.
+Those columns that don't have this information will be interpreted with the 
default settings (like the case above: values are double precision floating 
point, and the column has no name, unit, or comment).
+So these lines are all acceptable for any table (the first one, with nothing 
but the column number, is redundant):
 
 @example
 # Column 5:
@@ -9651,39 +7195,27 @@ table (the first one, with nothing but the column 
number is redundant):
 @end example
 
 @noindent
-The data type of the column should be specified with one of the following
-values:
+The data type of the column should be specified with one of the following 
values:
 
 @itemize
 @item
 For a numeric column, you can use any of the numeric types (and their
 recognized identifiers) described in @ref{Numeric data types}.
 @item
-`@code{strN}': for strings. The @code{N} value identifies the length of the
-string (how many characters it has). The start of the string on each row is
-the first non-delimiter character of the column that has the string
-type. The next @code{N} characters will be interpreted as a string and all
-leading and trailing white space will be removed.
-
-If the next column's characters, are closer than @code{N} characters to the
-start of the string column in that line/row, they will be considered part
-of the string column. If there is a new-line character before the ending of
-the space given to the string column (in other words, the string column is
-the last column), then reading of the string will stop, even if the
-@code{N} characters are not complete yet. See @file{tests/table/table.txt}
-for one example. Therefore, the only time you have to pay attention to the
-positioning and spaces given to the string column is when it is not the
-last column in the table.
-
-The only limitation in this format is that trailing and leading white space
-characters will be removed from the columns that are read. In most cases,
-this is the desired behavior, but if trailing and leading white-spaces are
-critically important to your analysis, define your own starting and ending
-characters and remove them after the table has been read. For example in
-the sample table below, the two `@key{|}' characters (which are arbitrary)
-will remain in the value of the second column and you can remove them
-manually later. If only one of the leading or trailing white spaces is
-important for your work, you can only use one of the `@key{|}'s.
+`@code{strN}': for strings.
+The @code{N} value identifies the length of the string (how many characters it 
has).
+The start of the string on each row is the first non-delimiter character of 
the column that has the string type.
+The next @code{N} characters will be interpreted as a string and all leading 
and trailing white space will be removed.
+
+If the next column's characters are closer than @code{N} characters to the 
start of the string column in that line/row, they will be considered part of 
the string column.
+If there is a new-line character before the ending of the space given to the 
string column (in other words, the string column is the last column), then 
reading of the string will stop, even if the @code{N} characters are not 
complete yet.
+See @file{tests/table/table.txt} for one example.
+Therefore, the only time you have to pay attention to the positioning and 
spaces given to the string column is when it is not the last column in the 
table.
+
+The only limitation in this format is that trailing and leading white space 
characters will be removed from the columns that are read.
+In most cases, this is the desired behavior, but if trailing and leading 
white-spaces are critically important to your analysis, define your own 
starting and ending characters and remove them after the table has been read.
+For example in the sample table below, the two `@key{|}' characters (which are 
arbitrary) will remain in the value of the second column and you can remove 
them manually later.
+If only one of the leading or trailing white spaces is important for your 
work, you can use only one of the `@key{|}'s.
 
 @example
 # Column 1: ID [label, u8]
@@ -9694,88 +7226,54 @@ important for your work, you can only use one of the `@key{|}'s.
 
 @end itemize
 
-Note that the FITS binary table standard does not define the @code{unsigned
-int} and @code{unsigned long} types, so if you want to convert your tables
-to FITS binary tables, use other types. Also, note that in the FITS ASCII
-table, there is only one integer type (@code{long}). So if you convert a
-Gnuastro plain text table to a FITS ASCII table with the @ref{Table}
-program, the type information for integers will be lost. Conversely if
-integer types are important for you, you have to manually set them when
-reading a FITS ASCII table (for example with the Table program when
-reading/converting into a file, or with the @file{gnuastro/table.h} library
-functions when reading into memory).
+Note that the FITS binary table standard does not define the @code{unsigned int} and @code{unsigned long} types, so if you want to convert your tables to FITS binary tables, use other types.
+Also, note that in the FITS ASCII table, there is only one integer type (@code{long}).
+So if you convert a Gnuastro plain text table to a FITS ASCII table with the @ref{Table} program, the type information for integers will be lost.
+Conversely, if integer types are important for you, you have to manually set them when reading a FITS ASCII table (for example with the Table program when reading/converting into a file, or with the @file{gnuastro/table.h} library functions when reading into memory).
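+As a sketch of such a conversion (the file names here are hypothetical, and we assume the @option{--tableformat} option for selecting the output table format), the first command below would write a plain text table into a FITS binary table and the second into a FITS ASCII table:
+
+@example
+$ asttable table.txt --output=table.fits
+$ asttable table.txt --output=table.fits --tableformat=fits-ascii
+@end example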
 
 
 @node Selecting table columns,  , Gnuastro text table format, Tables
 @subsection Selecting table columns
 
-At the lowest level, the only defining aspect of a column in a table is its
-number, or position. But selecting columns purely by number is not very
-convenient and, especially when the tables are large it can be very
-frustrating and prone to errors. Hence, table file formats (for example see
-@ref{Recognized table formats}) have ways to store additional information
-about the columns (meta-data). Some of the most common pieces of
-information about each column are its @emph{name}, the @emph{units} of data
-in the it, and a @emph{comment} for longer/informal description of the
-column's data.
+At the lowest level, the only defining aspect of a column in a table is its number, or position.
+But selecting columns purely by number is not very convenient and, especially when the tables are large, it can be very frustrating and prone to errors.
+Hence, table file formats (for example see @ref{Recognized table formats}) have ways to store additional information about the columns (meta-data).
+Some of the most common pieces of information about each column are its @emph{name}, the @emph{units} of data in it, and a @emph{comment} for a longer/informal description of the column's data.
 
-To facilitate research with Gnuastro, you can select columns by matching,
-or searching in these three fields, besides the low-level column number. To
-view the full list of information on the columns in the table, you can use
-the Table program (see @ref{Table}) with the command below (replace
-@file{table-file} with the filename of your table, if its FITS, you might
-also need to specify the HDU/extension which contains the table):
+To facilitate research with Gnuastro, you can select columns by matching or searching in these three fields, besides the low-level column number.
+To view the full list of information on the columns in the table, you can use the Table program (see @ref{Table}) with the command below (replace @file{table-file} with the filename of your table; if it is FITS, you might also need to specify the HDU/extension which contains the table):
 
 @example
 $ asttable --information table-file
 @end example
 
-Gnuastro's programs need the columns for different purposes, for example in
-Crop, you specify the columns containing the central coordinates of the
-crop centers with the @option{--coordcol} option (see @ref{Crop
-options}). On the other hand, in MakeProfiles, to specify the column
-containing the profile position angles, you must use the @option{--pcol}
-option (see @ref{MakeProfiles catalog}). Thus, there can be no unified
-common option name to select columns for all programs (different columns
-have different purposes). However, when the program expects a column for a
-specific context, the option names end in the @option{col} suffix like the
-examples above. These options accept values in integer (column number), or
-string (metadata match/search) format.
-
-If the value can be parsed as a positive integer, it will be seen as the
-low-level column number. Note that column counting starts from 1, so if you
-ask for column 0, the respective program will abort with an error. When the
-value can't be interpreted as an a integer number, it will be seen as a
-string of characters which will be used to match/search in the table's
-meta-data. The meta-data field which the value will be compared with can be
-selected through the @option{--searchin} option, see @ref{Input output
-options}. @option{--searchin} can take three values: @code{name},
-@code{unit}, @code{comment}. The matching will be done following this
-convention:
+Gnuastro's programs need the columns for different purposes; for example, in Crop, you specify the columns containing the central coordinates of the crop centers with the @option{--coordcol} option (see @ref{Crop options}).
+On the other hand, in MakeProfiles, to specify the column containing the profile position angles, you must use the @option{--pcol} option (see @ref{MakeProfiles catalog}).
+Thus, there can be no unified common option name to select columns for all programs (different columns have different purposes).
+However, when the program expects a column for a specific context, the option names end in the @option{col} suffix like the examples above.
+These options accept values in integer (column number) or string (metadata match/search) format.
+
+If the value can be parsed as a positive integer, it will be seen as the low-level column number.
+Note that column counting starts from 1, so if you ask for column 0, the respective program will abort with an error.
+When the value can't be interpreted as an integer number, it will be seen as a string of characters which will be used to match/search in the table's meta-data.
+The meta-data field which the value will be compared with can be selected through the @option{--searchin} option, see @ref{Input output options}.
+@option{--searchin} can take three values: @code{name}, @code{unit}, @code{comment}.
+The matching will be done following this convention:
 
 @itemize
 @item
-If the value is enclosed in two slashes (for example @command{-x/RA_/}, or
-@option{--coordcol=/RA_/}, see @ref{Crop options}), then it is assumed to
-be a regular expression with the same convention as GNU AWK. GNU AWK has a
-very well written
-@url{https://www.gnu.org/software/gawk/manual/html_node/Regexp.html,
-chapter} describing regular expressions, so we we will not continue
-discussing them here. Regular expressions are a very powerful tool in
-matching text and useful in many contexts. We thus strongly encourage
-reviewing this chapter for greatly improving the quality of your work in
-many cases, not just for searching column meta-data in Gnuastro.
+If the value is enclosed in two slashes (for example @command{-x/RA_/}, or @option{--coordcol=/RA_/}, see @ref{Crop options}), then it is assumed to be a regular expression with the same convention as GNU AWK.
+GNU AWK has a very well written @url{https://www.gnu.org/software/gawk/manual/html_node/Regexp.html, chapter} describing regular expressions, so we will not continue discussing them here.
+Regular expressions are a very powerful tool in matching text and useful in many contexts.
+We thus strongly encourage reviewing this chapter for greatly improving the quality of your work in many cases, not just for searching column meta-data in Gnuastro.
 
 @item
-When the string isn't enclosed between `@key{/}'s, any column that exactly
-matches the given value in the given field will be selected.
+When the string isn't enclosed between `@key{/}'s, any column that exactly matches the given value in the given field will be selected.
 @end itemize
 
-Note that in both cases, you can ignore the case of alphabetic characters
-with the @option{--ignorecase} option, see @ref{Input output options}. Also, in
-both cases, multiple columns may be selected with one call to this
-function. In this case, the order of the selected columns (with one call)
-will be the same order as they appear in the table.
+Note that in both cases, you can ignore the case of alphabetic characters with the @option{--ignorecase} option, see @ref{Input output options}.
+Also, in both cases, multiple columns may be selected with a single option call.
+In this case, the order of the selected columns (with one call) will be the same as the order in which they appear in the table.
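+For example (a sketch, with a hypothetical @file{catalog.fits}), the first command below selects the third column by its number, while the second selects every column whose name contains @code{MAG_}, ignoring case:
+
+@example
+$ asttable catalog.fits --column=3
+$ asttable catalog.fits --column=/MAG_/ --ignorecase
+@end example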
 
 
 
@@ -9784,53 +7282,30 @@ will be the same order as they appear in the table.
 @node Tessellation, Automatic output, Tables, Common program behavior
 @section Tessellation
 
-It is sometimes necessary to classify the elements in a dataset (for
-example pixels in an image) into a grid of individual, non-overlapping
-tiles. For example when background sky gradients are present in an image,
-you can define a tile grid over the image. When the tile sizes are set
-properly, the background's variation over each tile will be negligible,
-allowing you to measure (and subtract) it. In other cases (for example
-spatial domain convolution in Gnuastro, see @ref{Convolve}), it might
-simply be for speed of processing: each tile can be processed independently
-on a separate CPU thread. In the arts and mathematics, this process is
-formally known as @url{https://en.wikipedia.org/wiki/Tessellation,
-tessellation}.
-
-The size of the regular tiles (in units of data-elements, or pixels in an
-image) can be defined with the @option{--tilesize} option. It takes
-multiple numbers (separated by a comma) which will be the length along the
-respective dimension (in FORTRAN/FITS dimension order). Divisions are also
-acceptable, but must result in an integer. For example
-@option{--tilesize=30,40} can be used for an image (a 2D dataset). The
-regular tile size along the first FITS axis (horizontal when viewed in SAO
-ds9) will be 30 pixels and along the second it will be 40 pixels. Ideally,
-@option{--tilesize} should be selected such that all tiles in the image
-have exactly the same size. In other words, that the dataset length in each
-dimension is divisible by the tile size in that dimension.
-
-However, this is not always possible: the dataset can be any size and every
-pixel in it is valuable. In such cases, Gnuastro will look at the
-significance of the remainder length, if it is not significant (for example
-one or two pixels), then it will just increase the size of the first tile
-in the respective dimension and allow the rest of the tiles to have the
-required size. When the remainder is significant (for example one pixel
-less than the size along that dimension), the remainder will be added to
-one regular tile's size and the large tile will be cut in half and put in
-the two ends of the grid/tessellation. In this way, all the tiles in the
-central regions of the dataset will have the regular tile sizes and the
-tiles on the edge will be slightly larger/smaller depending on the
-remainder significance. The fraction which defines the remainder
-significance along all dimensions can be set through
-@option{--remainderfrac}.
-
-The best tile size is directly related to the spatial properties of the
-property you want to study (for example, gradient on the image). In
-practice we assume that the gradient is not present over each tile. So if
-there is a strong gradient (for example in long wavelength ground based
-images) or the image is of a crowded area where there isn't too much blank
-area, you have to choose a smaller tile size. A larger mesh will give more
-pixels and and so the scatter in the results will be less (better
-statistics).
+It is sometimes necessary to classify the elements in a dataset (for example pixels in an image) into a grid of individual, non-overlapping tiles.
+For example when background sky gradients are present in an image, you can define a tile grid over the image.
+When the tile sizes are set properly, the background's variation over each tile will be negligible, allowing you to measure (and subtract) it.
+In other cases (for example spatial domain convolution in Gnuastro, see @ref{Convolve}), it might simply be for speed of processing: each tile can be processed independently on a separate CPU thread.
+In the arts and mathematics, this process is formally known as @url{https://en.wikipedia.org/wiki/Tessellation, tessellation}.
+
+The size of the regular tiles (in units of data-elements, or pixels in an image) can be defined with the @option{--tilesize} option.
+It takes multiple numbers (separated by a comma) which will be the length along the respective dimension (in FORTRAN/FITS dimension order).
+Divisions are also acceptable, but must result in an integer.
+For example @option{--tilesize=30,40} can be used for an image (a 2D dataset).
+The regular tile size along the first FITS axis (horizontal when viewed in SAO ds9) will be 30 pixels and along the second it will be 40 pixels.
+Ideally, @option{--tilesize} should be selected such that all tiles in the image have exactly the same size.
+In other words, the dataset length in each dimension should be divisible by the tile size in that dimension.
+
+However, this is not always possible: the dataset can be any size and every pixel in it is valuable.
+In such cases, Gnuastro will look at the significance of the remainder length; if it is not significant (for example one or two pixels), it will just increase the size of the first tile in the respective dimension and allow the rest of the tiles to have the required size.
+When the remainder is significant (for example one pixel less than the size along that dimension), the remainder will be added to one regular tile's size and the large tile will be cut in half and put at the two ends of the grid/tessellation.
+In this way, all the tiles in the central regions of the dataset will have the regular tile sizes and the tiles on the edge will be slightly larger/smaller depending on the remainder significance.
+The fraction which defines the remainder significance along all dimensions can be set through @option{--remainderfrac}.
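+As a sketch of this logic with made-up numbers: for an axis of 103 pixels and a regular tile size of 20 pixels, the 3 pixel remainder may be judged insignificant, so the first tile along that axis simply grows to 23 pixels; but for an axis of 119 pixels, the 19 pixel remainder is significant, so one tile grows to 39 pixels and is cut into tiles of 19 and 20 pixels that are placed at the two ends of that axis.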
+
+The best tile size is directly related to the spatial properties of the quantity you want to study (for example, gradient on the image).
+In practice we assume that the gradient is not present over each tile.
+So if there is a strong gradient (for example in long wavelength ground based images) or the image is of a crowded area where there isn't too much blank area, you have to choose a smaller tile size.
+A larger mesh will give more pixels and so the scatter in the results will be less (better statistics).
 
 @cindex CCD
 @cindex Amplifier
@@ -9838,47 +7313,31 @@ statistics).
 @cindex Subaru Telescope
 @cindex Hyper Suprime-Cam
 @cindex Hubble Space Telescope (HST)
-For raw image processing, a single tessellation/grid is not sufficient. Raw
-images are the unprocessed outputs of the camera detectors. Modern
-detectors usually have multiple readout channels each with its own
-amplifier. For example the Hubble Space Telescope Advanced Camera for
-Surveys (ACS) has four amplifiers over its full detector area dividing the
-square field of view to four smaller squares. Ground based image detectors
-are not exempt, for example each CCD of Subaru Telescope's Hyper
-Suprime-Cam camera (which has 104 CCDs) has four amplifiers, but they have
-the same height of the CCD and divide the width by four parts.
+For raw image processing, a single tessellation/grid is not sufficient.
+Raw images are the unprocessed outputs of the camera detectors.
+Modern detectors usually have multiple readout channels, each with its own amplifier.
+For example the Hubble Space Telescope Advanced Camera for Surveys (ACS) has four amplifiers over its full detector area, dividing the square field of view into four smaller squares.
+Ground based image detectors are not exempt; for example, each CCD of Subaru Telescope's Hyper Suprime-Cam camera (which has 104 CCDs) has four amplifiers, but they have the same height as the CCD and divide its width into four parts.
 
 @cindex Channel
-The bias current on each amplifier is different, and initial bias
-subtraction is not perfect. So even after subtracting the measured bias
-current, you can usually still identify the boundaries of different
-amplifiers by eye. See Figure 11(a) in Akhlaghi and Ichikawa (2015) for an
-example. This results in the final reduced data to have non-uniform
-amplifier-shaped regions with higher or lower background flux values. Such
-systematic biases will then propagate to all subsequent measurements we do
-on the data (for example photometry and subsequent stellar mass and star
-formation rate measurements in the case of galaxies).
-
-Therefore an accurate analysis requires a two layer tessellation: the top
-layer contains larger tiles, each covering one amplifier channel. For
-clarity we'll call these larger tiles ``channels''. The number of channels
-along each dimension is defined through the @option{--numchannels}. Each
-channel is then covered by its own individual smaller tessellation (with
-tile sizes determined by the @option{--tilesize} option). This will allow
-independent analysis of two adjacent pixels from different channels if
-necessary. If the image is processed or the detector only has one
-amplifier, you can set the number of channels in both dimension to 1.
-
-The final tessellation can be inspected on the image with the
-@option{--checktiles} option that is available to all programs which use
-tessellation for localized operations. When this option is called, a FITS
-file with a @file{_tiled.fits} suffix will be created along with the
-outputs, see @ref{Automatic output}. Each pixel in this image has the
-number of the tile that covers it. If the number of channels in any
-dimension are larger than unity, you will notice that the tile IDs are
-defined such that the first channels is covered first, then the second and
-so on. For the full list of processing-related common options (including
-tessellation options), please see @ref{Processing options}.
+The bias current on each amplifier is different, and initial bias subtraction is not perfect.
+So even after subtracting the measured bias current, you can usually still identify the boundaries of different amplifiers by eye.
+See Figure 11(a) in Akhlaghi and Ichikawa (2015) for an example.
+This causes the final reduced data to have non-uniform amplifier-shaped regions with higher or lower background flux values.
+Such systematic biases will then propagate to all subsequent measurements we do on the data (for example photometry and subsequent stellar mass and star formation rate measurements in the case of galaxies).
+
+Therefore an accurate analysis requires a two layer tessellation: the top layer contains larger tiles, each covering one amplifier channel.
+For clarity we'll call these larger tiles ``channels''.
+The number of channels along each dimension is defined through the @option{--numchannels} option.
+Each channel is then covered by its own individual smaller tessellation (with tile sizes determined by the @option{--tilesize} option).
+This will allow independent analysis of two adjacent pixels from different channels if necessary.
+If the image is processed or the detector only has one amplifier, you can set the number of channels in both dimensions to 1.
+
+The final tessellation can be inspected on the image with the @option{--checktiles} option that is available to all programs which use tessellation for localized operations.
+When this option is called, a FITS file with a @file{_tiled.fits} suffix will be created along with the outputs, see @ref{Automatic output}.
+Each pixel in this image has the number of the tile that covers it.
+If the number of channels in any dimension is larger than unity, you will notice that the tile IDs are defined such that the first channel is covered first, then the second and so on.
+For the full list of processing-related common options (including tessellation options), please see @ref{Processing options}.
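+For example, the command below (a sketch on a hypothetical @file{image.fits}, assuming @option{--numchannels} takes the same comma-separated format as @option{--tilesize}) would produce @file{image_tiled.fits} for inspecting a grid of 30 by 40 pixel tiles over two channels along the first FITS axis:
+
+@example
+$ astnoisechisel image.fits --tilesize=30,40 \
+                 --numchannels=2,1 --checktiles
+@end example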
 
 
 
@@ -9891,38 +7350,21 @@ tessellation options), please see @ref{Processing options}.
 @cindex Automatic output file names
 @cindex Output file names, automatic
 @cindex Setting output file names automatically
-All the programs in Gnuastro are designed such that specifying an output
-file or directory (based on the program context) is optional. When no
-output name is explicitly given (with @option{--output}, see @ref{Input
-output options}), the programs will automatically set an output name based
-on the input name(s) and what the program does. For example when you are
-using ConvertType to save FITS image named @file{dataset.fits} to a JPEG
-image and don't specify a name for it, the JPEG output file will be name
-@file{dataset.jpg}. When the input is from the standard input (for example
-a pipe, see @ref{Standard input}), and @option{--output} isn't given, the
-output name will be the program's name (for example
-@file{converttype.jpg}).
+All the programs in Gnuastro are designed such that specifying an output file or directory (based on the program context) is optional.
+When no output name is explicitly given (with @option{--output}, see @ref{Input output options}), the programs will automatically set an output name based on the input name(s) and what the program does.
+For example when you are using ConvertType to save a FITS image named @file{dataset.fits} to a JPEG image and don't specify a name for it, the JPEG output file will be named @file{dataset.jpg}.
+When the input is from the standard input (for example a pipe, see @ref{Standard input}), and @option{--output} isn't given, the output name will be the program's name (for example @file{converttype.jpg}).
 
 @vindex --keepinputdir
-Another very important part of the automatic output generation is that all
-the directory information of the input file name is stripped off of
-it. This feature can be disabled with the @option{--keepinputdir} option,
-see @ref{Input output options}. It is the default because astronomical data
-are usually very large and organized specially with special file names. In
-some cases, the user might not have write permissions in those
-directories@footnote{In fact, even if the data is stored on your own
-computer, it is advised to only grant write permissions to the super user
-or root. This way, you won't accidentally delete or modify your valuable
-data!}.
-
-Let's assume that we are working on a report and want to process the
-FITS images from two projects (ABC and DEF), which are stored in the
-sub-directories named @file{ABCproject/} and @file{DEFproject/} of our
-top data directory (@file{/mnt/data}). The following shell commands
-show how one image from the former is first converted to a JPEG image
-through ConvertType and then the objects from an image in the latter
-project are detected using NoiseChisel. The text after the @command{#}
-sign are comments (not typed!).
+Another very important part of the automatic output generation is that all the directory information of the input file name is stripped from it.
+This feature can be disabled with the @option{--keepinputdir} option, see @ref{Input output options}.
+It is the default because astronomical data are usually very large and organized in special directories with particular file names.
+In some cases, the user might not have write permissions in those directories@footnote{In fact, even if the data is stored on your own computer, it is advised to only grant write permissions to the super user or root.
+This way, you won't accidentally delete or modify your valuable data!}.
+
+Let's assume that we are working on a report and want to process the FITS images from two projects (ABC and DEF), which are stored in the sub-directories named @file{ABCproject/} and @file{DEFproject/} of our top data directory (@file{/mnt/data}).
+The following shell commands show how one image from the former is first converted to a JPEG image through ConvertType and then the objects from an image in the latter project are detected using NoiseChisel.
+The text after the @command{#} sign is a comment (not typed!).
 
 @example
 $ pwd                                               # Current location
@@ -9951,119 +7393,81 @@ ABC01.jpg ABC02.jpg DEF01_detected.fits
 @cindex FITS
 @cindex Output FITS headers
 @cindex CFITSIO version on outputs
-The output of many of Gnuastro's programs are (or can be) FITS files. The
-FITS format has many useful features for storing scientific datasets
-(cubes, images and tables) along with a robust features for
-archivability. For more on this standard, please see @ref{Fits}.
-
-As a community convention described in @ref{Fits}, the first extension of
-all FITS files produced by Gnuastro's programs only contains the meta-data
-that is intended for the file's extension(s). For a Gnuastro program, this
-generic meta-data (that is stored as FITS keyword records) is its
-configuration when it produced this dataset: file name(s) of input(s) and
-option names, values and comments. Note that when the configuration is too
-trivial (only input filename, for example the program @ref{Table}) no
-meta-data is written in this extension.
-
-FITS keywords have the following limitations in regards to generic option
-names and values which are described below:
+The outputs of many of Gnuastro's programs are (or can be) FITS files.
+The FITS format has many useful features for storing scientific datasets (cubes, images and tables) along with robust features for archivability.
+For more on this standard, please see @ref{Fits}.
+
+As a community convention described in @ref{Fits}, the first extension of all FITS files produced by Gnuastro's programs only contains the meta-data that is intended for the file's extension(s).
+For a Gnuastro program, this generic meta-data (that is stored as FITS keyword records) is its configuration when it produced this dataset: file name(s) of input(s) and option names, values and comments.
+Note that when the configuration is too trivial (only an input filename, for example in the program @ref{Table}), no meta-data is written in this extension.
+
+FITS keywords have the following limitations with regard to generic option names and values, which are described below:
 
 @itemize
 @item
-If a keyword (option name) is longer than 8 characters, the first word in
-the record (80 character line) is @code{HIERARCH} which is followed by the
-keyword name.
+If a keyword (option name) is longer than 8 characters, the first word in the record (80 character line) is @code{HIERARCH}, which is followed by the keyword name.
 
 @item
-Values can be at most 75 characters, but for strings, this changes to 73
-(because of the two extra @key{'} characters that are necessary). However,
-if the value is a file name, containing slash (@key{/}) characters to
-separate directories, Gnuastro will break the value into multiple keywords.
+Values can be at most 75 characters, but for strings, this changes to 73 (because of the two extra @key{'} characters that are necessary).
+However, if the value is a file name containing slash (@key{/}) characters to separate directories, Gnuastro will break the value into multiple keywords.
 
 @item
 Keyword names ignore case, therefore they are all in capital letters.
-Therefore, if you want to use Grep to inspect these keywords, use the
-@option{-i} option, like the example below.
+Therefore, if you want to use Grep to inspect these keywords, use the @option{-i} option, like the example below.
 
 @example
 $ astfits image_detected.fits -h0 | grep -i snquant
 @end example
 @end itemize
 
-The keywords above are classified (separated by an empty line and title) as
-a group titled ``ProgramName configuration''. This meta-data extension, as
-well as all the other extensions (which contain data), also contain have
-final group of keywords to keep the basic date and version information of
-Gnuastro, its dependencies and the pipeline that is using Gnuastro (if its
-under version control).
+The keywords above are classified (separated by an empty line and title) as a group titled ``ProgramName configuration''.
+This meta-data extension, as well as all the other extensions (which contain data), also contains a final group of keywords to keep the basic date and version information of Gnuastro, its dependencies and the pipeline that is using Gnuastro (if it is under version control).
 
 @table @command
 
 @item DATE
-The creation time of the FITS file. This date is written directly by
-CFITSIO and is in UT format.
+The creation time of the FITS file.
+This date is written directly by CFITSIO and is in UT format.
 
 @item COMMIT
-Git's commit description from the running directory of Gnuastro's
-programs. If the running directory is not version controlled or
-@file{libgit2} isn't installed (see @ref{Optional dependencies}) then this
-keyword will not be present. The printed value is equivalent to the output
-of the following command:
+Git's commit description from the running directory of Gnuastro's programs.
+If the running directory is not version controlled or @file{libgit2} isn't installed (see @ref{Optional dependencies}), then this keyword will not be present.
+The printed value is equivalent to the output of the following command:
 
 @example
 git describe --dirty --always
 @end example
 
-If the running directory contains non-committed work, then the stored value
-will have a `@command{-dirty}' suffix. This can be very helpful to let you
-know that the data is not ready to be shared with collaborators or
-submitted to a journal. You should only share results that are produced
-after all your work is committed (safely stored in the version controlled
-history and thus reproducible).
-
-At first sight, version control appears to be mainly a tool for software
-developers. However progress in a scientific research is almost identical
-to progress in software development: first you have a rough idea that
-starts with handful of easy steps. But as the first results appear to be
-promising, you will have to extend, or generalize, it to make it more
-robust and work in all the situations your research covers, not just your
-first test samples. Slowly you will find wrong assumptions or bad
-implementations that need to be fixed (`bugs' in software development
-parlance). Finally, when you submit the research to your collaborators or a
-journal, many comments and suggestions will come in, and you have to
-address them.
-
-Software developers have created version control systems precisely for this
-kind of activity. Each significant moment in the project's history is
-called a ``commit'', see @ref{Version controlled source}. A snapshot of the
-project in each ``commit'' is safely stored away, so you can revert back to
-it at a later time, or check changes/progress. This way, you can be sure
-that your work is reproducible and track the progress and history. With
-version control, experimentation in the project's analysis is greatly
-facilitated, since you can easily revert back if a brainstorm test
-procedure fails.
-
-One important feature of version control is that the research result (FITS
-image, table, report or paper) can be stamped with the unique commit
-information that produced it. This information will enable you to exactly
-reproduce that same result later, even if you have made
-changes/progress. For one example of a research paper's reproduction
-pipeline, please see the
-@url{https://gitlab.com/makhlaghi/NoiseChisel-paper, reproduction pipeline}
-of the @url{https://arxiv.org/abs/1505.01664, paper} describing
-@ref{NoiseChisel}.
+If the running directory contains non-committed work, then the stored value will have a `@command{-dirty}' suffix.
+This can be very helpful to let you know that the data is not ready to be shared with collaborators or submitted to a journal.
+You should only share results that are produced after all your work is committed (safely stored in the version controlled history and thus reproducible).
+
+At first sight, version control appears to be mainly a tool for software developers.
+However, progress in scientific research is almost identical to progress in software development: first you have a rough idea that starts with a handful of easy steps.
+But as the first results appear to be promising, you will have to extend, or generalize, it to make it more robust and work in all the situations your research covers, not just your first test samples.
+Slowly you will find wrong assumptions or bad implementations that need to be fixed (`bugs' in software development parlance).
+Finally, when you submit the research to your collaborators or a journal, many comments and suggestions will come in, and you have to address them.
+
+Software developers have created version control systems precisely for this kind of activity.
+Each significant moment in the project's history is called a ``commit'', see @ref{Version controlled source}.
+A snapshot of the project in each ``commit'' is safely stored away, so you can revert to it at a later time, or check changes/progress.
+This way, you can be sure that your work is reproducible and track the progress and history.
+With version control, experimentation in the project's analysis is greatly facilitated, since you can easily revert if a brainstormed test procedure fails.
+
+One important feature of version control is that the research result (FITS image, table, report or paper) can be stamped with the unique commit information that produced it.
+This information will enable you to exactly reproduce that same result later, even if you have made changes/progress.
+For one example of a research paper's reproduction pipeline, please see the @url{https://gitlab.com/makhlaghi/NoiseChisel-paper, reproduction pipeline} of the @url{https://arxiv.org/abs/1505.01664, paper} describing @ref{NoiseChisel}.
 
 @item CFITSIO
 The version of CFITSIO used (see @ref{CFITSIO}).
 
 @item WCSLIB
-The version of WCSLIB used (see @ref{WCSLIB}). Note that older versions of
-WCSLIB do not report the version internally. So this is only available if
-you are using more recent WCSLIB versions.
+The version of WCSLIB used (see @ref{WCSLIB}).
+Note that older versions of WCSLIB do not report the version internally.
+So this is only available if you are using more recent WCSLIB versions.
 
 @item GSL
-The version of GNU Scientific Library that was used, see @ref{GNU
-Scientific Library}.
+The version of GNU Scientific Library that was used, see @ref{GNU Scientific Library}.
 
 @item GNUASTRO
 The version of Gnuastro used (see @ref{Version numbering}).
@@ -10101,43 +7505,27 @@ END
 @cindex File operations
 @cindex Operations on files
 @cindex General file operations
-The most low-level and basic property of a dataset is how it is stored. To
-process, archive and transmit the data, you need a container to store it
-first. From the start of the computer age, different formats have been
-defined to store data, optimized for particular applications. One
-format/container can never be useful for all applications: the storage
-defines the application and vice-versa. In astronomy, the Flexible Image
-Transport System (FITS) standard has become the most common format of data
-storage and transmission. It has many useful features, for example multiple
-sub-containers (also known as extensions or header data units, HDUs) within
-one file, or support for tables as well as images. Each HDU can store an
-independent dataset and its corresponding meta-data. Therefore, Gnuastro
-has one program (see @ref{Fits}) specifically designed to manipulate FITS
-HDUs and the meta-data (header keywords) in each HDU.
-
-Your astronomical research does not just involve data analysis (where the
-FITS format is very useful). For example you want to demonstrate your raw
-and processed FITS images or spectra as figures within slides, reports, or
-papers. The FITS format is not defined for such applications. Thus,
-Gnuastro also comes with the ConvertType program (see @ref{ConvertType})
-which can be used to convert a FITS image to and from (where possible)
-other formats like plain text and JPEG (which allow two way conversion),
-along with EPS and PDF (which can only be created from FITS, not the other
-way round).
-
-Finally, the FITS format is not just for images, it can also store
-tables. Binary tables in particular can be very efficient in storing
-catalogs that have more than a few tens of columns and rows. However,
-unlike images (where all elements/pixels have one data type), tables
-contain multiple columns and each column can have different properties:
-independent data types (see @ref{Numeric data types}) and meta-data. In
-practice, each column can be viewed as a separate container that is grouped
-with others in the table. The only shared property of the columns in a table
-is thus the number of elements they contain. To allow easy
-inspection/manipulation of table columns, Gnuastro has the Table program
-(see @ref{Table}). It can be used to select certain table columns in a FITS
-table and see them as a human readable output on the command-line, or to
-save them into another plain text or FITS table.
+The most low-level and basic property of a dataset is how it is stored.
+To process, archive and transmit the data, you need a container to store it first.
+From the start of the computer age, different formats have been defined to store data, optimized for particular applications.
+One format/container can never be useful for all applications: the storage defines the application and vice-versa.
+In astronomy, the Flexible Image Transport System (FITS) standard has become the most common format of data storage and transmission.
+It has many useful features, for example multiple sub-containers (also known as extensions or header data units, HDUs) within one file, or support for tables as well as images.
+Each HDU can store an independent dataset and its corresponding meta-data.
+Therefore, Gnuastro has one program (see @ref{Fits}) specifically designed to manipulate FITS HDUs and the meta-data (header keywords) in each HDU.
+
+Your astronomical research does not just involve data analysis (where the FITS format is very useful).
+For example you may want to present your raw and processed FITS images or spectra as figures within slides, reports, or papers.
+The FITS format is not defined for such applications.
+Thus, Gnuastro also comes with the ConvertType program (see @ref{ConvertType}) which can be used to convert a FITS image to and from (where possible) other formats like plain text and JPEG (which allow two-way conversion), along with EPS and PDF (which can only be created from FITS, not the other way round).
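+For example, the command below (a sketch on a hypothetical @file{image.fits}) would use ConvertType to produce a JPEG version of a FITS image, with the output format deduced from the given suffix:
+
+@example
+$ astconvertt image.fits --output=image.jpg
+@end example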
+
+Finally, the FITS format is not just for images; it can also store tables.
+Binary tables in particular can be very efficient in storing catalogs that have more than a few tens of columns and rows.
+However, unlike images (where all elements/pixels have one data type), tables contain multiple columns and each column can have different properties: independent data types (see @ref{Numeric data types}) and meta-data.
+In practice, each column can be viewed as a separate container that is grouped with others in the table.
+The only shared property of the columns in a table is thus the number of elements they contain.
+To allow easy inspection/manipulation of table columns, Gnuastro has the Table program (see @ref{Table}).
+It can be used to select certain table columns in a FITS table and see them as human readable output on the command-line, or to save them into another plain text or FITS table.
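+For example (a sketch, with hypothetical table and column names), the first command below would print two columns of a FITS table on the command-line, and the second would save them into a plain text table:
+
+@example
+$ asttable catalog.fits -cRA,DEC
+$ asttable catalog.fits -cRA,DEC --output=radec.txt
+@end example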
 
 @menu
 * Fits::                        View and manipulate extensions and keywords.
@@ -10154,70 +7542,38 @@ save them into another plain text or FITS table.
 @section Fits
 
 @cindex Vatican library
-The ``Flexible Image Transport System'', or FITS, is by far the most common
-data container format in astronomy and in constant use since the
-1970s. Archiving (future usage, simplicity) has been one of the primary
-design principles of this format. In the last few decades it has proved so
-useful and robust that the Vatican Library has also chosen FITS for its
-``long-term digital preservation''
-project@footnote{@url{https://www.vaticanlibrary.va/home.php?pag=progettodigit}}.
+The ``Flexible Image Transport System'', or FITS, is by far the most common data container format in astronomy and has been in constant use since the 1970s.
+Archiving (future usage, simplicity) has been one of the primary design principles of this format.
+In the last few decades it has proved so useful and robust that the Vatican Library has also chosen FITS for its ``long-term digital preservation'' project@footnote{@url{https://www.vaticanlibrary.va/home.php?pag=progettodigit}}.
 
 @cindex IAU, international astronomical union
-Although the full name of the standard invokes the idea that it is only for
-images, it also contains complete and robust features for tables. It
-started off in the 1970s and was formally published as a standard in 1981,
-it was adopted by the International Astronomical Union (IAU) in 1982 and an
-IAU working group to maintain its future was defined in 1988. The FITS 2.0
-and 3.0 standards were approved in 2000 and 2008 respectively, and the 4.0
-draft has also been released recently, please see the
-@url{https://fits.gsfc.nasa.gov/fits_standard.html, FITS standard document
-webpage} for the full text of all versions. Also see the
-@url{https://doi.org/10.1051/0004-6361/201015362, FITS 3.0 standard paper}
-for a nice introduction and history along with the full standard.
+Although the full name of the standard invokes the idea that it is only for images, it also contains complete and robust features for tables.
+It started off in the 1970s and was formally published as a standard in 1981; it was adopted by the International Astronomical Union (IAU) in 1982, and an IAU working group to maintain its future was defined in 1988.
+The FITS 2.0 and 3.0 standards were approved in 2000 and 2008 respectively, and the 4.0 draft has also been released recently; please see the @url{https://fits.gsfc.nasa.gov/fits_standard.html, FITS standard document webpage} for the full text of all versions.
+Also see the @url{https://doi.org/10.1051/0004-6361/201015362, FITS 3.0 standard paper} for a nice introduction and history along with the full standard.
 
 @cindex Meta-data
-Many common image formats, for example a JPEG, only have one image/dataset
-per file, however one great advantage of the FITS standard is that it
-allows you to keep multiple datasets (images or tables along with their
-separate meta-data) in one file. In the FITS standard, each data + metadata
-is known as an extension, or more formally a header data unit or HDU. The
-HDUs in a file can be completely independent: you can have multiple images
-of different dimensions/sizes or tables as separate extensions in one
-file. However, while the standard doesn't impose any constraints on the
-relation between the datasets, it is strongly encouraged to group data that
-are contextually related with each other in one file. For example an image
-and the table/catalog of objects and their measured properties in that
-image. Other examples can be images of one patch of sky in different colors
-(filters), or one raw telescope image along with its calibration data
-(tables or images).
-
-As discussed above, the extensions in a FITS file can be completely
-independent. To keep some information (meta-data) about the group of
-extensions in the FITS file, the community has adopted the following
-convention: put no data in the first extension, so it is just
-meta-data. This extension can thus be used to store Meta-data regarding the
-whole file (grouping of extensions). Subsequent extensions may contain data
-along with their own separate meta-data. All of Gnuastro's programs also
-follow this convention: the main output dataset(s) are placed in the second
-(or later) extension(s). The first extension contains no data the program's
-configuration (input file name, along with all its option values) are
-stored as its meta-data, see @ref{Output FITS files}.
-
-The meta-data contain information about the data, for example which region
-of the sky an image corresponds to, the units of the data, what telescope,
-camera, and filter the data were taken with, it observation date, or the
-software that produced it and its configuration. Without the meta-data, the
-raw dataset is practically just a collection of numbers and really hard to
-understand, or connect with the real world (other datasets). It is thus
-strongly encouraged to supplement your data (at any level of processing)
-with as much meta-data about your processing/science as possible.
-
-The meta-data of a FITS file is in ASCII format, which can be easily viewed
-or edited with a text editor or on the command-line. Each meta-data element
-(known as a keyword generally) is composed of a name, value, units and
-comments (the last two are optional). For example below you can see three
-FITS meta-data keywords for specifying the world coordinate system (WCS, or
-its location in the sky) of a dataset:
+Many common image formats, for example a JPEG, only have one image/dataset per file; however, one great advantage of the FITS standard is that it allows you to keep multiple datasets (images or tables along with their separate meta-data) in one file.
+In the FITS standard, each data + metadata is known as an extension, or more formally a header data unit or HDU.
+The HDUs in a file can be completely independent: you can have multiple images of different dimensions/sizes or tables as separate extensions in one file.
+However, while the standard doesn't impose any constraints on the relation between the datasets, it is strongly encouraged to group data that are contextually related with each other in one file.
+For example an image and the table/catalog of objects and their measured properties in that image.
+Other examples can be images of one patch of sky in different colors (filters), or one raw telescope image along with its calibration data (tables or images).
+
+As discussed above, the extensions in a FITS file can be completely independent.
+To keep some information (meta-data) about the group of extensions in the FITS file, the community has adopted the following convention: put no data in the first extension, so it is just meta-data.
+This extension can thus be used to store meta-data regarding the whole file (grouping of extensions).
+Subsequent extensions may contain data along with their own separate meta-data.
+All of Gnuastro's programs also follow this convention: the main output dataset(s) are placed in the second (or later) extension(s).
+The first extension contains no data; the program's configuration (input file name, along with all its option values) is stored as its meta-data, see @ref{Output FITS files}.
+
+The meta-data contain information about the data, for example which region of the sky an image corresponds to, the units of the data, what telescope, camera, and filter the data were taken with, its observation date, or the software that produced it and its configuration.
+Without the meta-data, the raw dataset is practically just a collection of numbers and really hard to understand, or connect with the real world (other datasets).
+It is thus strongly encouraged to supplement your data (at any level of processing) with as much meta-data about your processing/science as possible.
+
+The meta-data of a FITS file is in ASCII format, which can be easily viewed or edited with a text editor or on the command-line.
+Each meta-data element (known as a keyword generally) is composed of a name, value, units and comments (the last two are optional).
+For example below you can see three FITS meta-data keywords for specifying the world coordinate system (WCS, or its location in the sky) of a dataset:
 
 @example
 LATPOLE =           -27.805089 / [deg] Native latitude of celestial pole
@@ -10225,21 +7581,13 @@ RADESYS = 'FK5'                / Equatorial coordinate system
 EQUINOX =               2000.0 / [yr] Equinox of equatorial coordinates
 @end example
 
-However, there are some limitations which discourage viewing/editing the
-keywords with text editors. For example there is a fixed length of 80
-characters for each keyword (its name, value, units and comments) and there
-are no new-line characters, so on a text editor all the keywords are seen
-in one line. Also, the meta-data keywords are immediately followed by the
-data which are commonly in binary format and will show up as strange
-looking characters on a text editor, and significantly slowing down the
-processor.
+However, there are some limitations which discourage viewing/editing the keywords with text editors.
+For example there is a fixed length of 80 characters for each keyword (its name, value, units and comments) and there are no new-line characters, so in a text editor all the keywords are seen on one line.
+Also, the meta-data keywords are immediately followed by the data, which are commonly in binary format and will show up as strange-looking characters in a text editor, significantly slowing it down.
 
-Gnuastro's Fits program was designed to allow easy manipulation of FITS
-extensions and meta-data keywords on the command-line while conforming
-fully with the FITS standard. For example you can copy or cut (copy and
-remove) HDUs/extensions from one FITS file to another, or completely delete
-them. It also has features to delete, add, or edit meta-data keywords
-within one HDU.
+Gnuastro's Fits program was designed to allow easy manipulation of FITS extensions and meta-data keywords on the command-line while conforming fully with the FITS standard.
+For example you can copy or cut (copy and remove) HDUs/extensions from one FITS file to another, or completely delete them.
+It also has features to delete, add, or edit meta-data keywords within one HDU.
 
 @menu
 * Invoking astfits::            Arguments and options to Header.
@@ -10248,9 +7596,8 @@ within one HDU.
 @node Invoking astfits,  , Fits, Fits
 @subsection Invoking Fits
 
-Fits can print or manipulate the FITS file HDUs (extensions), meta-data
-keywords in a given HDU. The executable name is @file{astfits} with the
-following general template
+Fits can print or manipulate the FITS file HDUs (extensions) or the meta-data keywords in a given HDU.
+The executable name is @file{astfits} with the following general template
 
 @example
 $ astfits [OPTION...] ASTRdata
@@ -10287,29 +7634,15 @@ $ astfits --write=MYKEY1,20.00,"An example keyword" --write=MYKEY2,fd
 @end example
 
 @cindex HDU
-When no action is requested (and only a file name is given), Fits will
-print a list of information about the extension(s) in the file. This
-information includes the HDU number, HDU name (@code{EXTNAME} keyword),
-type of data (see @ref{Numeric data types}, and the number of data elements
-it contains (size along each dimension for images and table rows and
-columns). You can use this to get a general idea of the contents of the
-FITS file and what HDU to use for further processing, either with the Fits
-program or any other Gnuastro program.
-
-Here is one example of information about a FITS file with four extensions:
-the first extension has no data, it is a purely meta-data HDU (commonly
-used to keep meta-data about the whole file, or grouping of extensions, see
-@ref{Fits}). The second extension is an image with name @code{IMAGE} and
-single precision floating point type (@code{float32}, see @ref{Numeric data
-types}), it has 4287 pixels along its first (horizontal) axis and 4286
-pixels along its second (vertical) axis. The third extension is also an
-image with name @code{MASK}. It is in 2-byte integer format (@code{int16})
-which is commonly used to keep information about pixels (for example to
-identify which ones were saturated, or which ones had cosmic rays and so
-on), note how it has the same size as the @code{IMAGE} extension. The third
-extension is a binary table called @code{CATALOG} which has 12371 rows and
-5 columns (it probably contains information about the sources in the
-image).
+When no action is requested (and only a file name is given), Fits will print a list of information about the extension(s) in the file.
+This information includes the HDU number, HDU name (@code{EXTNAME} keyword), type of data (see @ref{Numeric data types}), and the number of data elements it contains (size along each dimension for images, and rows and columns for tables).
+You can use this to get a general idea of the contents of the FITS file and what HDU to use for further processing, either with the Fits program or any other Gnuastro program.
+
+Here is one example of information about a FITS file with four extensions: the first extension has no data, it is a purely meta-data HDU (commonly used to keep meta-data about the whole file, or grouping of extensions, see @ref{Fits}).
+The second extension is an image with name @code{IMAGE} and single precision floating point type (@code{float32}, see @ref{Numeric data types}); it has 4287 pixels along its first (horizontal) axis and 4286 pixels along its second (vertical) axis.
+The third extension is also an image with name @code{MASK}.
+It is in 2-byte integer format (@code{int16}), which is commonly used to keep information about pixels (for example to identify which ones were saturated, or which ones had cosmic rays and so on); note how it has the same size as the @code{IMAGE} extension.
+The fourth extension is a binary table called @code{CATALOG}, which has 12371 rows and 5 columns (it probably contains information about the sources in the image).
 
 @example
 GNU Astronomy Utilities X.X
@@ -10327,26 +7660,16 @@ HDU (extension) information: `image.fits'.
 3      CATALOG         table_binary    12371x5
 @end example
 
-If a specific HDU is identified on the command-line with the @option{--hdu}
-(or @option{-h} option) and no operation requested, then the full list of
-header keywords in that HDU will be printed (as if the
-@option{--printallkeys} was called, see below). It is important to remember
-that this only occurs when @option{--hdu} is given on the command-line. The
-@option{--hdu} value given in a configuration file will only be used when a
-specific operation on keywords requested. Therefore as described in the
-paragraphs above, when no explicit call to the @option{--hdu} option is
-made on the command-line and no operation is requested (on the command-line
-or configuration files), the basic information of each HDU/extension is
-printed.
-
-The operating mode and input/output options to Fits are similar to the
-other programs and fully described in @ref{Common options}. The options
-particular to Fits can be divided into two groups: 1) those related to
-modifying HDUs or extensions (see @ref{HDU manipulation}), and 2) those
-related to viewing/modifying meta-data keywords (see @ref{Keyword
-manipulation}). These two classes of options cannot be called together in
-one run: you can either work on the extensions or meta-data keywords in any
-instance of Fits.
+If a specific HDU is identified on the command-line with the @option{--hdu} (or @option{-h}) option and no operation is requested, then the full list of header keywords in that HDU will be printed (as if @option{--printallkeys} was called, see below).
+It is important to remember that this only occurs when @option{--hdu} is given on the command-line.
+The @option{--hdu} value given in a configuration file will only be used when a specific operation on keywords is requested.
+Therefore, as described in the paragraphs above, when no explicit call to the @option{--hdu} option is made on the command-line and no operation is requested (on the command-line or in configuration files), the basic information of each HDU/extension is printed.
+
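+For example, with a hypothetical file named @file{image.fits}, the command below would print all the keywords in its second HDU (recall that HDU counting starts from 0):
+
+@example
+$ astfits image.fits --hdu=1
+@end example
+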
+The operating mode and input/output options to Fits are similar to the other 
programs and fully described in @ref{Common options}.
+The options particular to Fits can be divided into two groups:
+1) those related to modifying HDUs or extensions (see @ref{HDU manipulation}), 
and
+2) those related to viewing/modifying meta-data keywords (see @ref{Keyword 
manipulation}).
+These two classes of options cannot be called together in one run: you can 
either work on the extensions or meta-data keywords in any instance of Fits.
 
 @menu
 * HDU manipulation::            Manipulate HDUs within a FITS file.
@@ -10359,39 +7682,29 @@ instance of Fits.
 
 @node HDU manipulation, Keyword manipulation, Invoking astfits, Invoking 
astfits
 @subsubsection HDU manipulation
-Each header data unit, or HDU (also known as an extension), in a FITS file
-is an independent dataset (data + meta-data). Multiple HDUs can be stored
-in one FITS file, see @ref{Fits}. The HDU modifying options to the Fits
-program are listed below.
-
-These options may be called multiple times in one run. If so, the
-extensions will be copied from the input FITS file to the output FITS file
-in the given order (on the command-line and also in configuration files,
-see @ref{Configuration file precedence}). If the separate classes are
-called together in one run of Fits, then first @option{--copy} is run (on
-all specified HDUs), followed by @option{--cut} (again on all specified
-HDUs), and then @option{--remove} (on all specified HDUs).
-
-The @option{--copy} and @option{--cut} options need an output FITS file
-(specified with the @option{--output} option). If the output file exists,
-then the specified HDU will be copied following the last extension of the
-output file (the existing HDUs in it will be untouched). Thus, after Fits
-finishes, the copied HDU will be the last HDU of the output file. If no
-output file name is given, then automatic output will be used to store the
-HDUs given to this option (see @ref{Automatic output}).
+Each header data unit, or HDU (also known as an extension), in a FITS file is 
an independent dataset (data + meta-data).
+Multiple HDUs can be stored in one FITS file, see @ref{Fits}.
+The HDU modifying options to the Fits program are listed below.
+
+These options may be called multiple times in one run.
+If so, the extensions will be copied from the input FITS file to the output 
FITS file in the given order (on the command-line and also in configuration 
files, see @ref{Configuration file precedence}).
+If the separate classes are called together in one run of Fits, then first 
@option{--copy} is run (on all specified HDUs), followed by @option{--cut} 
(again on all specified HDUs), and then @option{--remove} (on all specified 
HDUs).
+
+The @option{--copy} and @option{--cut} options need an output FITS file 
(specified with the @option{--output} option).
+If the output file exists, then the specified HDU will be copied following the 
last extension of the output file (the existing HDUs in it will be untouched).
+Thus, after Fits finishes, the copied HDU will be the last HDU of the output 
file.
+If no output file name is given, then automatic output will be used to store 
the HDUs given to this option (see @ref{Automatic output}).
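+
+For example, a hypothetical command like the one below would copy the second HDU of @file{in.fits} to the end of @file{out.fits} (creating @file{out.fits} if it doesn't already exist; the file names here are only for illustration):
+
+@example
+$ astfits in.fits --copy=1 --output=out.fits
+@end example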
 
 @table @option
 
 @item -n
 @itemx --numhdus
-Print the number of extensions/HDUs in the given file. Note that this
-option must be called alone and will only print a single number. It is thus
-useful in scripts, for example when you need to do check the number of
-extensions in a FITS file.
+Print the number of extensions/HDUs in the given file.
+Note that this option must be called alone and will only print a single number.
+It is thus useful in scripts, for example when you need to check the number of extensions in a FITS file.
 
-For a complete list of basic meta-data on the extensions in a FITS file,
-don't use any of the options in this section or in @ref{Keyword
-manipulation}. For more, see @ref{Invoking astfits}.
+For a complete list of basic meta-data on the extensions in a FITS file, don't 
use any of the options in this section or in @ref{Keyword manipulation}.
+For more, see @ref{Invoking astfits}.
 
 @item -C STR
 @itemx --copy=STR
@@ -10406,51 +7719,39 @@ output file, see explanations above.
 @itemx --remove=STR
 Remove the specified HDU from the input file.
 
-The first (zero-th) HDU cannot be removed with this option. Consider using
-@option{--copy} or @option{--cut} in combination with
-@option{primaryimghdu} to not have an empty zero-th HDU. From CFITSIO: ``In
-the case of deleting the primary array (the first HDU in the file) then
-[it] will be replaced by a null primary array containing the minimum set of
-required keywords and no data.''. So in practice, any existing data (array)
-and meta-data in the first extension will be removed, but the number of
-extensions in the file won't change. This is because of the unique position
-the first FITS extension has in the FITS standard (for example it cannot be
-used to store tables).
+The first (zero-th) HDU cannot be removed with this option.
+Consider using @option{--copy} or @option{--cut} in combination with @option{--primaryimghdu} to avoid an empty zero-th HDU.
+From CFITSIO: ``In the case of deleting the primary array (the first HDU in the file) then [it] will be replaced by a null primary array containing the minimum set of required keywords and no data.''
+So in practice, any existing data (array) and meta-data in the first extension 
will be removed, but the number of extensions in the file won't change.
+This is because of the unique position the first FITS extension has in the 
FITS standard (for example it cannot be used to store tables).
 
 @item --primaryimghdu
-Copy or cut an image HDU to the zero-th HDU/extension a file that doesn't
-yet exist. This option is thus irrelevant if the output file already exists
-or the copied/cut extension is a FITS table.
+Copy or cut an image HDU to the zero-th HDU/extension of a file that doesn't yet exist.
+This option is thus irrelevant if the output file already exists or the 
copied/cut extension is a FITS table.
 
 @end table
 
 
 @node Keyword manipulation,  , HDU manipulation, Invoking astfits
 @subsubsection Keyword manipulation
-The meta-data in each header data unit, or HDU (also known as extension,
-see @ref{Fits}) is stored as ``keyword''s. Each keyword consists of a name,
-value, unit, and comments. The Fits program (see @ref{Fits}) options
-related to viewing and manipulating keywords in a FITS HDU are described
-below.
-
-To see the full list of keywords in a FITS HDU, you can use the
-@option{--printallkeys} option. If any of the keywords are to be modified,
-the headers of the input file will be changed. If you want to keep the
-original FITS file or HDU, it is easiest to create a copy first and then
-run Fits on that. In the FITS standard, keywords are always uppercase. So
-case does not matter in the input or output keyword names you specify.
-
-Most of the options can accept multiple instances in one command. For
-example you can add multiple keywords to delete by calling
-@option{--delete} multiple times, since repeated keywords are allowed, you
-can even delete the same keyword multiple times. The action of such options
-will start from the top most keyword.
-
-The precedence of operations are described below. Note that while the order
-within each class of actions is preserved, the order of individual actions
-is not. So irrespective of what order you called @option{--delete} and
-@option{--update}. First, all the delete operations are going to take
-effect then the update operations.
+The meta-data in each header data unit, or HDU (also known as extension, see @ref{Fits}), is stored as ``keywords''.
+Each keyword consists of a name, value, unit, and comments.
+The Fits program (see @ref{Fits}) options related to viewing and manipulating 
keywords in a FITS HDU are described below.
+
+To see the full list of keywords in a FITS HDU, you can use the 
@option{--printallkeys} option.
+If any of the keywords are to be modified, the headers of the input file will 
be changed.
+If you want to keep the original FITS file or HDU, it is easiest to create a 
copy first and then run Fits on that.
+In the FITS standard, keywords are always uppercase.
+So case does not matter in the input or output keyword names you specify.
+
+Most of the options can accept multiple instances in one command.
+For example, you can specify multiple keywords to delete by calling @option{--delete} multiple times; since repeated keywords are allowed, you can even delete the same keyword multiple times.
+The action of such options will start from the top-most keyword.
+
+The precedence of operations is described below.
+Note that while the order within each class of actions is preserved, the order of individual actions is not.
+So irrespective of the order in which you called @option{--delete} and @option{--update}, first all the delete operations will take effect, then the update operations.
 @enumerate
 @item
 @option{--delete}
@@ -10476,18 +7777,14 @@ effect then the update operations.
 @option{--copykeys}
 @end enumerate
 @noindent
-All possible syntax errors will be reported before the keywords are
-actually written. FITS errors during any of these actions will be reported,
-but Fits won't stop until all the operations are complete. If
-@option{--quitonerror} is called, then Fits will immediately stop upon the
-first error.
+All possible syntax errors will be reported before the keywords are actually 
written.
+FITS errors during any of these actions will be reported, but Fits won't stop 
until all the operations are complete.
+If @option{--quitonerror} is called, then Fits will immediately stop upon the 
first error.
 
 @cindex GNU Grep
-If you want to inspect only a certain set of header keywords, it is easiest
-to pipe the output of the Fits program to GNU Grep. Grep is a very powerful
-and advanced tool to search strings which is precisely made for such
-situations. For example if you only want to check the size of an image FITS
-HDU, you can run:
+If you want to inspect only a certain set of header keywords, it is easiest to 
pipe the output of the Fits program to GNU Grep.
+Grep is a very powerful and advanced tool for searching strings, made precisely for such situations.
+For example, if you only want to check the size of an image FITS HDU, you can run:
 
 @example
 $ astfits input.fits | grep NAXIS
@@ -10495,14 +7792,10 @@ $ astfits input.fits | grep NAXIS
 
 @cartouche
 @noindent
-@strong{FITS STANDARD KEYWORDS:} Some header keywords are necessary
-for later operations on a FITS file, for example BITPIX or NAXIS, see
-the FITS standard for their full list. If you modify (for example
-remove or rename) such keywords, the FITS file extension might not be
-usable any more. Also be careful for the world coordinate system
-keywords, if you modify or change their values, any future world
-coordinate system (like RA and Dec) measurements on the image will
-also change.
+@strong{FITS STANDARD KEYWORDS:}
+Some header keywords are necessary for later operations on a FITS file, for example @code{BITPIX} or @code{NAXIS}; see the FITS standard for their full list.
+If you modify (for example remove or rename) such keywords, the FITS file extension might not be usable any more.
+Also be careful with the world coordinate system keywords: if you modify their values, any future world coordinate system (like RA and Dec) measurements on the image will also change.
 @end cartouche
 
 
@@ -10512,14 +7805,11 @@ The keyword related options to the Fits program are 
fully described below.
 
 @item -a STR
 @itemx --asis=STR
-Write @option{STR} exactly into the FITS file header with no
-modifications. If it does not conform to the FITS standards, then it might
-cause trouble, so please be very careful with this option. If you want to
-define the keyword from scratch, it is best to use the @option{--write}
-option (see below) and let CFITSIO worry about the standards. The best way
-to use this option is when you want to add a keyword from one FITS file to
-another unchanged and untouched. Below is an example of such a case that
-can be very useful sometimes (on the command-line or in scripts):
+Write @option{STR} exactly into the FITS file header with no modifications.
+If it does not conform to the FITS standards, then it might cause trouble, so 
please be very careful with this option.
+If you want to define the keyword from scratch, it is best to use the 
@option{--write} option (see below) and let CFITSIO worry about the standards.
+The best way to use this option is when you want to copy a keyword from one FITS file to another unchanged.
+Below is an example of such a case that can be very useful sometimes (on the 
command-line or in scripts):
 
 @example
 $ key=$(astfits firstimage.fits | grep KEYWORD)
@@ -10527,43 +7817,32 @@ $ astfits --asis="$key" secondimage.fits
 @end example
 
 @cindex GNU Bash
-In particular note the double quotation signs (@key{"}) around the
-reference to the @command{key} shell variable (@command{$key}), since FITS
-keywords usually have lots of space characters, if this variable is not
-quoted, the shell will only give the first word in the full keyword to this
-option, which will definitely be a non-standard FITS keyword and will make
-it hard to work on the file afterwords. See the ``Quoting'' section of the
-GNU Bash manual for more information if your keyword has the special
-characters @key{$}, @key{`}, or @key{\}.
+In particular note the double quotation signs (@key{"}) around the reference 
to the @command{key} shell variable (@command{$key}).
+FITS keywords usually have lots of space characters; if this variable is not quoted, the shell will only give the first word in the full keyword to this option, which will definitely be a non-standard FITS keyword and will make it hard to work on the file afterwards.
+See the ``Quoting'' section of the GNU Bash manual for more information if 
your keyword has the special characters @key{$}, @key{`}, or @key{\}.
 
 @item -d STR
 @itemx --delete=STR
-Delete one instance of the @option{STR} keyword from the FITS
-header. Multiple instances of @option{--delete} can be given (possibly even
-for the same keyword, when its repeated in the meta-data). All keywords
-given will be removed from the headers in the same given order. If the
-keyword doesn't exist, Fits will give a warning and return with a non-zero
-value, but will not stop. To stop as soon as an error occurs, run with
-@option{--quitonerror}.
+Delete one instance of the @option{STR} keyword from the FITS header.
+Multiple instances of @option{--delete} can be given (possibly even for the same keyword, when it is repeated in the meta-data).
+All keywords given will be removed from the headers in the same given order.
+If the keyword doesn't exist, Fits will give a warning and return with a 
non-zero value, but will not stop.
+To stop as soon as an error occurs, run with @option{--quitonerror}.
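+
+For example, assuming a file with a @code{COMMENT} and an @code{AIRMASS} keyword in its second extension, one instance of each could be deleted with:
+
+@example
+$ astfits input.fits -h1 --delete=COMMENT --delete=AIRMASS
+@end example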
 
 @item -r STR
 @itemx --rename=STR
-Rename a keyword to a new value. @option{STR} contains both the existing
-and new names, which should be separated by either a comma (@key{,}) or a
-space character. Note that if you use a space character, you have to put
-the value to this option within double quotation marks (@key{"}) so the
-space character is not interpreted as an option separator. Multiple
-instances of @option{--rename} can be given in one command. The keywords
-will be renamed in the specified order. If the keyword doesn't exist, Fits
-will give a warning and return with a non-zero value, but will not stop. To
-stop as soon as an error occurs, run with @option{--quitonerror}.
+Rename a keyword to a new name.
+@option{STR} contains both the existing and new names, which should be 
separated by either a comma (@key{,}) or a space character.
+Note that if you use a space character, you have to put the value to this 
option within double quotation marks (@key{"}) so the space character is not 
interpreted as an option separator.
+Multiple instances of @option{--rename} can be given in one command.
+The keywords will be renamed in the specified order.
+If the keyword doesn't exist, Fits will give a warning and return with a 
non-zero value, but will not stop.
+To stop as soon as an error occurs, run with @option{--quitonerror}.
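+
+For example, a keyword with the hypothetical name @code{OLDNAME} could be renamed to @code{NEWNAME} in the second extension with:
+
+@example
+$ astfits input.fits -h1 --rename=OLDNAME,NEWNAME
+@end example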
 
 @item -u STR
 @itemx --update=STR
-Update a keyword, its value, its comments and its units in the format
-described below. If there are multiple instances of the keyword in the
-header, they will be changed from top to bottom (with multiple
-@option{--update} options).
+Update a keyword, its value, its comments and its units in the format 
described below.
+If there are multiple instances of the keyword in the header, they will be 
changed from top to bottom (with multiple @option{--update} options).
 
 @noindent
 The format of the values to this option can best be specified with an
@@ -10573,50 +7852,36 @@ example:
 --update=KEYWORD,value,"comments for this keyword",unit
 @end example
 
-If there is a writing error, Fits will give a warning and return with a
-non-zero value, but will not stop. To stop as soon as an error occurs, run
-with @option{--quitonerror}.
-
-@noindent
-The value can be any numerical or string value@footnote{Some tricky
-situations arise with values like `@command{87095e5}', if this was intended
-to be a number it will be kept in the header as @code{8709500000} and there
-is no problem. But this can also be a shortened Git commit hash. In the
-latter case, it should be treated as a string and stored as it is
-written. Commit hashes are very important in keeping the history of a file
-during your research and such values might arise without you noticing them
-in your reproduction pipeline. One solution is to use @command{git
-describe} instead of the short hash alone. A less recommended solution is
-to add a space after the commit hash and Fits will write the value as
-`@command{87095e5 }' in the header. If you later compare the strings on the
-shell, the space character will be ignored by the shell in the latter
-solution and there will be no problem.}. Other than the @code{KEYWORD}, all
-the other values are optional. To leave a given token empty, follow the
-preceding comma (@key{,}) immediately with the next. If any space character
-is present around the commas, it will be considered part of the respective
-token. So if more than one token has space characters within it, the safest
-method to specify a value to this option is to put double quotation marks
-around each individual token that needs it. Note that without double
-quotation marks, space characters will be seen as option separators and can
-lead to undefined behavior.
+If there is a writing error, Fits will give a warning and return with a 
non-zero value, but will not stop.
+To stop as soon as an error occurs, run with @option{--quitonerror}.
+
+@noindent
+The value can be any numerical or string value@footnote{Some tricky situations arise with values like `@command{87095e5}': if this was intended to be a number, it will be kept in the header as @code{8709500000} and there is no problem.
+But this can also be a shortened Git commit hash.
+In the latter case, it should be treated as a string and stored as it is 
written.
+Commit hashes are very important in keeping the history of a file during your 
research and such values might arise without you noticing them in your 
reproduction pipeline.
+One solution is to use @command{git describe} instead of the short hash alone.
+A less recommended solution is to add a space after the commit hash and Fits 
will write the value as `@command{87095e5 }' in the header.
+If you later compare the strings on the shell, the space character will be 
ignored by the shell in the latter solution and there will be no problem.}.
+Other than the @code{KEYWORD}, all the other values are optional.
+To leave a given token empty, follow the preceding comma (@key{,}) immediately 
with the next.
+If any space character is present around the commas, it will be considered 
part of the respective token.
+So if more than one token has space characters within it, the safest method to 
specify a value to this option is to put double quotation marks around each 
individual token that needs it.
+Note that without double quotation marks, space characters will be seen as 
option separators and can lead to undefined behavior.
 
 @item -w STR
 @itemx --write=STR
-Write a keyword to the header. For the possible value input formats,
-comments and units for the keyword, see the @option{--update} option
-above. The special names (first string) below will cause a special
-behavior:
+Write a keyword to the header.
+For the possible value input formats, comments and units for the keyword, see 
the @option{--update} option above.
+The special names (first string) below will cause a special behavior:
 
 @table @option
 
 @item /
-Write a ``title'' to the list of keywords. A title consists of one blank
-line and another which is blank for several spaces and starts with a slash
-(@key{/}). The second string given to this option is the ``title'' or
-string printed after the slash. For example with the command below you can
-add a ``title'' of `My keywords' after the existing keywords and add the
-subsequent @code{K1} and @code{K2} keywords under it (note that keyword
-names are not case sensitive).
+Write a ``title'' to the list of keywords.
+A title consists of one blank line and another that starts with several blank spaces followed by a slash (@key{/}).
+The second string given to this option is the ``title'' or string printed 
after the slash.
+For example, with the command below you can add a ``title'' of `My keywords' after the existing keywords, and add the subsequent @code{K1} and @code{K2} keywords under it (note that keyword names are not case sensitive).
 
 @example
 $ astfits test.fits -h1 --write=/,"My keywords" \
@@ -10631,19 +7896,14 @@ K2      =                 4.56 / My second keyword
 END
 @end example
 
-Adding a ``title'' before each contextually separate group of header
-keywords greatly helps in readability and visual inspection of the
-keywords. So generally, when you want to add new FITS keywords, its good
-practice to also add a title before them.
+Adding a ``title'' before each contextually separate group of header keywords 
greatly helps in readability and visual inspection of the keywords.
+So generally, when you want to add new FITS keywords, it is good practice to also add a title before them.
 
-The reason you need to use @key{/} as the keyword name for setting a title
-is that @key{/} is the first non-white character.
+The reason you need to use @key{/} as the keyword name for setting a title is that @key{/} is the first non-whitespace character in the title line.
 
-The title(s) is(are) written into the FITS with the same order that
-@option{--write} is called. Therefore in one run of the Fits program, you
-can specify many different titles (with their own keywords under them). For
-example the command below that builds on the previous example and adds
-another group of keywords named @code{A1} and @code{A2}.
+The title(s) will be written into the FITS file in the same order that @option{--write} is called.
+Therefore, in one run of the Fits program, you can specify many different titles (with their own keywords under them).
+For example, the command below builds on the previous example and adds another group of keywords named @code{A1} and @code{A2}.
 
 @example
 $ astfits test.fits -h1 --write=/,"My keywords"   \
@@ -10658,97 +7918,70 @@ $ astfits test.fits -h1 --write=/,"My keywords"   \
 @cindex CFITSIO
 @cindex @code{DATASUM}: FITS keyword
 @cindex @code{CHECKSUM}: FITS keyword
-When nothing is given afterwards, the header integrity
-keywords@footnote{Section 4.4.2.7 (page 15) of
-@url{https://fits.gsfc.nasa.gov/standard40/fits_standard40aa-le.pdf}}
-@code{DATASUM} and @code{CHECKSUM} will be written/updated. They are
-calculated and written by CFITSIO. They thus comply with the FITS standard
-4.0 that defines these keywords.
-
-If a value is given (for example
-@option{--write=checksum,my-own-checksum,"my checksum"}), then CFITSIO
-won't be called to calculate these two keywords and the value (as well as
-possible comment and unit) will be written just like any other
-keyword. This is generally not recommended, but necessary in special
-circumstances (where the checksum needs to be manually updated for
-example).
-
-@code{DATASUM} only depends on the data section of the HDU/extension, so it
-is not changed when you update the keywords. But @code{CHECKSUM} also
-depends on the header and will not be valid if you make any further changes
-to the header. This includes any further keyword modification options in
-the same call to the Fits program. Therefore it is recommended to write
-these keywords as the last keywords that are written/modified in the
-extension. You can use the @option{--verify} option (described below) to
-verify the values of these two keywords.
+When nothing is given afterwards, the header integrity 
keywords@footnote{Section 4.4.2.7 (page 15) of 
@url{https://fits.gsfc.nasa.gov/standard40/fits_standard40aa-le.pdf}} 
@code{DATASUM} and @code{CHECKSUM} will be written/updated.
+They are calculated and written by CFITSIO.
+They thus comply with the FITS standard 4.0 that defines these keywords.
+
+If a value is given (for example @option{--write=checksum,my-own-checksum,"my 
checksum"}), then CFITSIO won't be called to calculate these two keywords and 
the value (as well as possible comment and unit) will be written just like any 
other keyword.
+This is generally not recommended, but necessary in special circumstances (for example, where the checksum needs to be manually updated).
+
+@code{DATASUM} only depends on the data section of the HDU/extension, so it is 
not changed when you update the keywords.
+But @code{CHECKSUM} also depends on the header and will not be valid if you 
make any further changes to the header.
+This includes any further keyword modification options in the same call to the 
Fits program.
+Therefore it is recommended to write these keywords as the last keywords that 
are written/modified in the extension.
+You can use the @option{--verify} option (described below) to verify the 
values of these two keywords.
 
 @item datasum
-Similar to @option{checksum}, but only write the @code{DATASUM} keyword
-(that doesn't depend on the header keywords, only the data).
+Similar to @option{checksum}, but only write the @code{DATASUM} keyword (that 
doesn't depend on the header keywords, only the data).
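+
+For example, a minimal call to write/update only @code{DATASUM} in the second extension of a hypothetical @file{input.fits} could be:
+
+@example
+$ astfits input.fits -h1 --write=datasum
+@end example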
 @end table
 
 @item -H STR
 @itemx --history STR
-Add a @code{HISTORY} keyword to the header with the given value. A new
-@code{HISTORY} keyword will be created for every instance of this
-option. If the string given to this option is longer than 70 characters, it
-will be separated into multiple keyword cards. If there is an error, Fits
-will give a warning and return with a non-zero value, but will not stop. To
-stop as soon as an error occurs, run with @option{--quitonerror}.
+Add a @code{HISTORY} keyword to the header with the given value.
+A new @code{HISTORY} keyword will be created for every instance of this option.
+If the string given to this option is longer than 70 characters, it will be separated into multiple keyword cards.
+If there is an error, Fits will give a warning and return with a non-zero value, but will not stop.
+To stop as soon as an error occurs, run with @option{--quitonerror}.
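+
+For example, a hypothetical processing note could be recorded with:
+
+@example
+$ astfits input.fits -h1 --history="Sky background subtracted"
+@end example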
 
 @item -c STR
 @itemx --comment STR
-Add a @code{COMMENT} keyword to the header with the given value. Similar to
-the explanation for @option{--history} above.
+Add a @code{COMMENT} keyword to the header with the given value.
+Similar to the explanation for @option{--history} above.
 
 @item -t
 @itemx --date
-Put the current date and time in the header. If the @code{DATE} keyword
-already exists in the header, it will be updated. If there is a writing
-error, Fits will give a warning and return with a non-zero value, but will
-not stop. To stop as soon as an error occurs, run with
-@option{--quitonerror}.
+Put the current date and time in the header.
+If the @code{DATE} keyword already exists in the header, it will be updated.
+If there is a writing error, Fits will give a warning and return with a 
non-zero value, but will not stop.
+To stop as soon as an error occurs, run with @option{--quitonerror}.
 
 @item -p
 @itemx --printallkeys
-Print all the keywords in the specified FITS extension (HDU) on the
-command-line. If this option is called along with any of the other keyword
-editing commands, as described above, all other editing commands take
-precedence to this. Therefore, it will print the final keywords after all
-the editing has been done.
+Print all the keywords in the specified FITS extension (HDU) on the 
command-line.
+If this option is called along with any of the other keyword editing commands described above, all the editing commands take precedence over it.
+Therefore, it will print the final keywords after all the editing has been done.
 
 @item -v
 @itemx --verify
-Verify the @code{DATASUM} and @code{CHECKSUM} data integrity keywords of
-the FITS standard. See the description under the @code{checksum} (under
-@option{--write}, above) for more on these keywords.
+Verify the @code{DATASUM} and @code{CHECKSUM} data integrity keywords of the 
FITS standard.
+See the description under @code{checksum} (under @option{--write}, above) for more on these keywords.
 
-This option will print @code{Verified} for both keywords if they can be
-verified. Otherwise, if they don't exist in the given HDU/extension, it
-will print @code{NOT-PRESENT}, and if they cannot be verified it will print
-@code{INCORRECT}. In the latter case (when the keyword values exist but
-can't be verified), the Fits program will also return with a failure.
+This option will print @code{Verified} for both keywords if they can be 
verified.
+Otherwise, if they don't exist in the given HDU/extension, it will print 
@code{NOT-PRESENT}, and if they cannot be verified it will print 
@code{INCORRECT}.
+In the latter case (when the keyword values exist but can't be verified), the 
Fits program will also return with a failure.
 
-By default this function will also print a short description of the
-@code{DATASUM} AND @code{CHECKSUM} keywords. You can suppress this extra
-information with @code{--quiet} option.
+By default this option will also print a short description of the @code{DATASUM} and @code{CHECKSUM} keywords.
+You can suppress this extra information with the @option{--quiet} option.
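+
+For example, a hypothetical verification call (with the extra description suppressed) could be:
+
+@example
+$ astfits input.fits -h1 --verify --quiet
+@end example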
 
 @item --copykeys=INT:INT
 Copy the input's keyword records in the given range (inclusive) to the
 output HDU (specified with the @option{--output} and @option{--outhdu}
 options, for the filename and HDU/extension respectively).
 
-The given string to this option must be two integers separated by a colon
-(@key{:}). The first integer must be positive (counting of the keyword
-records starts from 1). The second integer may be negative (zero is not
-acceptable) or an integer larger than the first.
+The given string to this option must be two integers separated by a colon 
(@key{:}).
+The first integer must be positive (counting of the keyword records starts 
from 1).
+The second integer may be negative (zero is not acceptable) or an integer 
larger than the first.
 
-A negative second integer means counting from the end. So @code{-1} is the
-last copy-able keyword (not including the @code{END} keyword).
+A negative second integer means counting from the end.
+So @code{-1} is the last copyable keyword (not including the @code{END} keyword).
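+
+For example, with the hypothetical command below, keyword records 7 to 12 of @file{input.fits} would be copied to the second HDU of @file{out.fits} (the file names here are only for illustration):
+
+@example
+$ astfits input.fits -h1 --copykeys=7:12 \
+          --output=out.fits --outhdu=1
+@end example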
 
-To see the header keywords of the input with a number before them, you can
-pipe the output of the FITS program (when it prints all the keywords in an
-extension) into the @command{cat} program like below:
+To see the header keywords of the input with a number before them, you can pipe the output of the Fits program (when it prints all the keywords in an extension) into the @command{cat} program like below:
 
 @example
 $ astfits input.fits -h1 | cat -n
@@ -10759,36 +7992,24 @@ The HDU/extension to write the output keywords of 
@option{--copykeys}.
 
 @item -Q
 @itemx --quitonerror
-Quit if any of the operations above are not successful. By default if
-an error occurs, Fits will warn the user of the faulty keyword and
-continue with the rest of actions.
+Quit if any of the operations above are not successful.
+By default, if an error occurs, Fits will warn the user of the faulty keyword and continue with the rest of the actions.
 
 @item -s STR
 @itemx --datetosec STR
 @cindex Unix epoch time
 @cindex Time, Unix epoch
 @cindex Epoch, Unix time
-Interpret the value of the given keyword in the FITS date format (most
-generally: @code{YYYY-MM-DDThh:mm:ss.ddd...}) and return the corresponding
-Unix epoch time (number of seconds that have passed since 00:00:00
-Thursday, January 1st, 1970). The @code{Thh:mm:ss.ddd...} section
-(specifying the time of day), and also the @code{.ddd...} (specifying the
-fraction of a second) are optional. The value to this option must be the
-FITS keyword name that contains the requested date, for example
-@option{--datetosec=DATE-OBS}.
+Interpret the value of the given keyword in the FITS date format (most 
generally: @code{YYYY-MM-DDThh:mm:ss.ddd...}) and return the corresponding Unix 
epoch time (number of seconds that have passed since 00:00:00 Thursday, January 
1st, 1970).
+The @code{Thh:mm:ss.ddd...} section (specifying the time of day), and also the 
@code{.ddd...} (specifying the fraction of a second) are optional.
+The value to this option must be the FITS keyword name that contains the 
requested date, for example @option{--datetosec=DATE-OBS}.
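+
+For example, a hypothetical call on a file whose second extension contains a @code{DATE-OBS} keyword could be:
+
+@example
+$ astfits input.fits -h1 --datetosec=DATE-OBS
+@end example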
 
 @cindex GNU C Library
-This option can also interpret the older FITS date format
-(@code{DD/MM/YYThh:mm:ss.ddd...}) where only two characters are given to
-the year. In this case (following the GNU C Library), this option will make
-the following assumption: values 68 to 99 correspond to the years 1969 to
-1999, and values 0 to 68 as the years 2000 to 2068.
+This option can also interpret the older FITS date format 
(@code{DD/MM/YYThh:mm:ss.ddd...}) where only two characters are given to the 
year.
+In this case (following the GNU C Library), this option will make the following assumption: values 69 to 99 correspond to the years 1969 to 1999, and values 0 to 68 to the years 2000 to 2068.
 
-This is a very useful option for operations on the FITS date values, for
-example sorting FITS files by their dates, or finding the time difference
-between two FITS files. The advantage of working with the Unix epoch time
-is that you don't have to worry about calendar details (for example the
-number of days in different months, or leap years, and etc).
+This is a very useful option for operations on the FITS date values, for 
example sorting FITS files by their dates, or finding the time difference 
between two FITS files.
+The advantage of working with the Unix epoch time is that you don't have to 
worry about calendar details (for example the number of days in different 
months, or leap years, etc).
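+
+As a minimal sketch (assuming every file has @code{DATE-OBS} in its second extension, and that the epoch time is printed alone on the standard output), the files could be sorted by date like this:
+
+@example
+$ for f in *.fits; do
+    echo "$(astfits "$f" -h1 --datetosec=DATE-OBS) $f"
+  done | sort -n
+@end example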
 @end table
 
 
@@ -10814,63 +8035,38 @@ number of days in different months, or leap years, and 
etc).
 @section Sort FITS files by night
 
 @cindex Calendar
-FITS images usually contain (several) keywords for preserving important
-dates. In particular, for lower-level data, this is usually the observation
-date and time (for example, stored in the @code{DATE-OBS} keyword
-value). When analyzing observed datasets, many calibration steps (like the
-dark, bias or flat-field), are commonly calculated on a per-observing-night
-basis.
-
-However, the FITS standard's date format (@code{YYYY-MM-DDThh:mm:ss.ddd})
-is based on the western (Gregorian) calendar. Dates that are stored in this
-format are complicated for automatic processing: a night starts in the
-final hours of one calendar day, and extends to the early hours of the next
-calendar day. As a result, to identify datasets from one night, we commonly
-need to search for two dates. However calendar peculiarities can make this
-identification very difficult. For example when an observation is done on
-the night separating two months (like the night starting on March 31st and
-going into April 1st), or two years (like the night starting on December
-31st 2018 and going into January 1st, 2019). To account for such
-situations, it is necessary to keep track of how many days are in a month,
-and leap years, and etc.
+FITS images usually contain (several) keywords for preserving important dates.
+In particular, for lower-level data, this is usually the observation date and 
time (for example, stored in the @code{DATE-OBS} keyword value).
+When analyzing observed datasets, many calibration steps (like the dark, bias or flat-field) are commonly calculated on a per-observing-night basis.
+
+However, the FITS standard's date format (@code{YYYY-MM-DDThh:mm:ss.ddd}) is 
based on the western (Gregorian) calendar.
+Dates that are stored in this format are complicated for automatic processing: 
a night starts in the final hours of one calendar day, and extends to the early 
hours of the next calendar day.
+As a result, to identify datasets from one night, we commonly need to search 
for two dates.
+However, calendar peculiarities can make this identification very difficult.
+For example, an observation may be done on the night separating two months (like the night starting on March 31st and going into April 1st), or two years (like the night starting on December 31st, 2018 and going into January 1st, 2019).
+To account for such situations, it is necessary to keep track of how many days are in each month, which years are leap years, etc.
 
 @cindex Unix epoch time
 @cindex Time, Unix epoch
 @cindex Epoch, Unix time
-Gnuastro's @file{astscript-sort-by-night} script is created to help in such
-important scenarios. It uses @ref{Fits} to convert the FITS date format
-into the Unix epoch time (number of seconds since 00:00:00 of January 1st,
-1970), using the @option{--datetosec} option. The Unix epoch time is a
-single number (integer, if not given in sub-second precision), enabling
-easy comparison and sorting of dates after January 1st, 1970.
+Gnuastro's @file{astscript-sort-by-night} script is created to help in such 
important scenarios.
+It uses @ref{Fits} to convert the FITS date format into the Unix epoch time 
(number of seconds since 00:00:00 of January 1st, 1970), using the 
@option{--datetosec} option.
+The Unix epoch time is a single number (integer, if not given in sub-second 
precision), enabling easy comparison and sorting of dates after January 1st, 
1970.
 
-You can use this script as a basis for making a much more highly customized
-sorting script. Here are some examples
+You can use this script as a basis for making a more customized sorting script.
+Here are some examples:
 
 @itemize
 @item
-If you need to copy the files, but only need a single extension (not the
-whole file), you can add a step just before the making of the symbolic
-links, or copies, and change it to only copy a certain extension of the
-FITS file using the Fits program's @option{--copy} option, see @ref{HDU
-manipulation}.
+If you need to copy the files, but only need a single extension (not the whole file), you can add a step just before the making of the symbolic links, or copies, and change it to only copy a certain extension of the FITS file using the Fits program's @option{--copy} option (see @ref{HDU manipulation}).
 
 @item
-If you need to classify the files with finer detail (for example the
-purpose of the dataset), you can add a step just before the making of the
-symbolic links, or copies, to specify a file-name prefix based on other
-certain keyword values in the files. For example when the FITS files have a
-keyword to specify if the dataset is a science, bias, or flat-field
-image. You can read it and to add a @code{sci-}, @code{bias-}, or
-@code{flat-} to the created file (after the @option{--prefix})
-automatically.
-
-For example, let's assume the observing mode is stored in the hypothetical
-@code{MODE} keyword, which can have three values of @code{BIAS-IMAGE},
-@code{SCIENCE-IMAGE} and @code{FLAT-EXP}. With the step below, you can
-generate a mode-prefix, and add it to the generated link/copy names (just
-correct the filename and extension of the first line to the script's
-variables):
+If you need to classify the files with finer detail (for example the purpose of the dataset), you can add a step just before the making of the symbolic links, or copies, to specify a file-name prefix based on certain other keyword values in the files.
+For example, the FITS files may have a keyword that specifies if the dataset is a science, bias, or flat-field image.
+You can read it and automatically add a @code{sci-}, @code{bias-}, or @code{flat-} prefix to the created file (after the @option{--prefix}).
+
+For example, let's assume the observing mode is stored in the hypothetical 
@code{MODE} keyword, which can have three values of @code{BIAS-IMAGE}, 
@code{SCIENCE-IMAGE} and @code{FLAT-EXP}.
+With the step below, you can generate a mode-prefix and add it to the generated link/copy names (just replace the file name and extension in the first line with the script's variables):
 
 @example
 modepref=$(astfits infile.fits -h1 \
@@ -10884,26 +8080,18 @@ modepref=$(astfits infile.fits -h1 \
 
 @cindex GNU AWK
 @cindex GNU Sed
-Here is a description of it. We first use @command{astfits} to print all
-the keywords in extension @code{1} of @file{infile.fits}. In the FITS
-standard, string values (that we are assuming here) are placed in single
-quotes (@key{'}) which are annoying in this context/use-case. Therefore, we
-pipe the output of @command{astfits} into @command{sed} to remove all such
-quotes (substituting them with a blank space). The result is then piped to
-AWK for giving us the final mode-prefix: with @code{$1=="MODE"}, we ask AWK
-to only consider the line where the first column is @code{MODE}. There is
-an equal sign between the key name and value, so the value is the third
-column (@code{$3} in AWK). We thus use a simple @code{if-else} structure to
-look into this value and print our custom prefix based on it. The output of
-AWK is then stored in the @code{modepref} shell variable which you can add
-to the link/copy name.
-
-With the solution above, the increment of the file counter for each night
-will be independent of the mode. If you want the counter to be
-mode-dependent, you can add a different counter for each mode and use that
-counter instead of the generic counter for each night (based on the value
-of @code{modepref}). But we'll leave the implementation of this step to you
-as an exercise.
+Here is a description of it.
+We first use @command{astfits} to print all the keywords in extension @code{1} 
of @file{infile.fits}.
+In the FITS standard, string values (that we are assuming here) are placed in 
single quotes (@key{'}) which are annoying in this context/use-case.
+Therefore, we pipe the output of @command{astfits} into @command{sed} to 
remove all such quotes (substituting them with a blank space).
+The result is then piped to AWK to give us the final mode-prefix: with @code{$1=="MODE"}, we ask AWK to only consider the line where the first column is @code{MODE}.
+There is an equal sign between the key name and value, so the value is the 
third column (@code{$3} in AWK).
+We thus use a simple @code{if-else} structure to look into this value and 
print our custom prefix based on it.
+The output of AWK is then stored in the @code{modepref} shell variable which 
you can add to the link/copy name.
+
+With the solution above, the increment of the file counter for each night will 
be independent of the mode.
+If you want the counter to be mode-dependent, you can add a different counter 
for each mode and use that counter instead of the generic counter for each 
night (based on the value of @code{modepref}).
+But we'll leave the implementation of this step to you as an exercise.
 
 @end itemize
 
@@ -10914,10 +8102,9 @@ as an exercise.
 @node Invoking astscript-sort-by-night,  , Sort FITS files by night, Sort FITS 
files by night
 @subsection Invoking astscript-sort-by-night
 
-This installed script will read a FITS date formatted value from the given
-keyword, and classify the input FITS files into individual nights. For more
-on installed scripts please see (see @ref{Installed scripts}). This script
-can be used with the following general template:
+This installed script will read a FITS date formatted value from the given 
keyword, and classify the input FITS files into individual nights.
+For more on installed scripts, please see @ref{Installed scripts}.
+This script can be used with the following general template:
 
 @example
 $ astscript-sort-by-night [OPTION...] FITS-files
@@ -10934,25 +8121,15 @@ $ astscript-sort-by-night --key=DATE-OBS 
/path/to/data/*.fits
 $ astscript-sort-by-night --link --prefix=img- /path/to/data/*.fits
 @end example
 
-This script will look into a HDU/extension (@option{--hdu}) for a keyword
-(@option{--key}) in the given FITS files and interpret the value as a
-date. The inputs will be separated by "night"s (9:00a.m to next day's
-8:59:59a.m, spanning two calendar days, exact hour can be set with
-@option{--hour}).
+This script will look into an HDU/extension (@option{--hdu}) for a keyword (@option{--key}) in the given FITS files and interpret the value as a date.
+The inputs will be separated by ``night''s (9:00 a.m. to the next day's 8:59:59 a.m., spanning two calendar days; the exact hour can be set with @option{--hour}).
 
-The default output is a list of all the input files along with the
-following two columns: night number and file number in that night (sorted
-by time). With @option{--link} a symbolic link will be made (one for each
-input) that contains the night number, and number of file in that night
-(sorted by time), see the description of @option{--link} for more. When
-@option{--copy} is used instead of a link, a copy of the inputs will be
-made instead of symbolic link.
+The default output is a list of all the input files along with the following 
two columns: night number and file number in that night (sorted by time).
+With @option{--link}, a symbolic link will be made (one for each input) that contains the night number and the file number in that night (sorted by time); see the description of @option{--link} for more.
+When @option{--copy} is used, a copy of each input will be made instead of a symbolic link.
 
-Below you can see one example where all the @file{target-*.fits} files in
-the @file{data} directory should be separated by observing night according
-to the @code{DATE-OBS} keyword value in their second extension (number
-@code{1}, recall that HDU counting starts from 0). You can see the output
-after the @code{ls} command.
+Below you can see one example where all the @file{target-*.fits} files in the 
@file{data} directory should be separated by observing night according to the 
@code{DATE-OBS} keyword value in their second extension (number @code{1}, 
recall that HDU counting starts from 0).
+You can see the output after the @code{ls} command.
 
 @example
 $ astscript-sort-by-night -pimg- -h1 -kDATE-OBS data/target-*.fits
@@ -10960,21 +8137,16 @@ $ ls
 img-n1-1.fits img-n1-2.fits img-n2-1.fits ...
 @end example
 
-The outputs can be placed in a different (already existing) directory by
-including that directory's name in the @option{--prefix} value, for example
-@option{--prefix=sorted/img-} will put them all under the @file{sorted}
-directory.
+The outputs can be placed in a different (already existing) directory by including that directory's name in the @option{--prefix} value; for example, @option{--prefix=sorted/img-} will put them all under the @file{sorted} directory.
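+
+For example, a hypothetical run placing copies (instead of links) in an existing @file{sorted} directory could be:
+
+@example
+$ astscript-sort-by-night --copy --prefix=sorted/img- \
+                          --key=DATE-OBS data/target-*.fits
+@end example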
 
-This script can be configured like all Gnuastro's programs (through
-command-line options, see @ref{Common options}), with some minor
-differences that are described in @ref{Installed scripts}. The particular
-options to this script are listed below:
+This script can be configured like all Gnuastro's programs (through 
command-line options, see @ref{Common options}), with some minor differences 
that are described in @ref{Installed scripts}.
+The particular options to this script are listed below:
 
 @table @option
 @item -h STR
 @itemx --hdu=STR
-The HDU/extension to use in all the given FITS files. All of the given FITS
-files must have this extension.
+The HDU/extension to use in all the given FITS files.
+All of the given FITS files must have this extension.
 
 @item -k STR
 @itemx --key=STR
@@ -10982,37 +8154,31 @@ The keyword name that contains the FITS date format to 
classify/sort by.
 
 @item -H FLT
 @itemx --hour=FLT
-The hour that defines the next ``night''. By default, all times before
-9:00a.m are considered to belong to the previous calendar night. If a
-sub-hour value is necessary, it should be given in units of hours, for
-example @option{--hour=9.5} corresponds to 9:30a.m.
+The hour that defines the next ``night''.
+By default, all times before 9:00 a.m. are considered to belong to the previous calendar night.
+If a sub-hour value is necessary, it should be given in units of hours, for example @option{--hour=9.5} corresponds to 9:30 a.m.
 
 @cartouche
 @noindent
 @cindex Time zone
 @cindex UTC (Universal time coordinate)
 @cindex Universal time coordinate (UTC)
-@strong{Dealing with time zones:} the time that is recorded in
-@option{--key} may be in UTC (Universal Time Coordinate). However, the
-organization of the images taken during the night depends on the local
-time. It is possible to take this into account by setting the
-@option{--hour} option to the local time in UTC.
-
-For example, consider a set of images taken in Auckland (New Zealand,
-UTC+12) during different nights. If you want to classify these images by
-night, you have to know at which time (in UTC time) the Sun rises (or any
-other separator/definition of a different night). In this particular
-example, you can use @option{--hour=21}. Because in Auckland, a night
-finishes (roughly) at the local time of 9:00, which corresponds to 21:00
-UTC.
+@strong{Dealing with time zones:}
+The time that is recorded in @option{--key} may be in UTC (Universal Time 
Coordinate).
+However, the organization of the images taken during the night depends on the 
local time.
+It is possible to take this into account by setting the @option{--hour} option 
to the local time in UTC.
+
+For example, consider a set of images taken in Auckland (New Zealand, UTC+12) 
during different nights.
+If you want to classify these images by night, you have to know at which time (in UTC) the Sun rises (or any other separator/definition of a different night).
+In this particular example, you can use @option{--hour=21}, because in Auckland a night finishes (roughly) at the local time of 9:00, which corresponds to 21:00 UTC.
 @end cartouche
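+
+For example, a hypothetical run with this setting could be:
+
+@example
+$ astscript-sort-by-night --key=DATE-OBS --hour=21 images/*.fits
+@end example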
 
 @item -l
 @itemx --link
-Create a symbolic link for each input FITS file. This option cannot be used
-with @option{--copy}. The link will have a standard name in the following
-format (variable parts are written in @code{CAPITAL} letters and described
-after it):
+Create a symbolic link for each input FITS file.
+This option cannot be used with @option{--copy}.
+The link will have a standard name in the following format (variable parts are 
written in @code{CAPITAL} letters and described after it):
 
 @example
 PnN-I.fits
@@ -11020,38 +8186,31 @@ PnN-I.fits
 
 @table @code
 @item P
-This is the value given to @option{--prefix}. By default, its value is
-@code{./} (to store the links in the directory this script was run in). See
-the description of @code{--prefix} for more.
+This is the value given to @option{--prefix}.
+By default, its value is @code{./} (to store the links in the directory this 
script was run in).
+See the description of @option{--prefix} for more.
 @item N
-This is the night-counter: starting from 1. @code{N} is just incremented by
-1 for the next night, no matter how many nights (without any dataset) there
-are between two subsequent observing nights (its just an identifier for
-each night which you can easily map to different calendar nights).
+This is the night-counter: starting from 1.
+@code{N} is just incremented by 1 for the next night, no matter how many nights (without any dataset) there are between two subsequent observing nights (it is just an identifier for each night which you can easily map to different calendar nights).
 @item I
 File counter in that night, sorted by time.
 @end table
 
 @item -c
 @itemx --copy
-Make a copy of each input FITS file with the standard naming convention
-described in @option{--link}. With this option, instead of making a link, a
-copy is made. This option cannot be used with @option{--link}.
+Make a copy of each input FITS file with the standard naming convention 
described in @option{--link}.
+With this option, instead of making a link, a copy is made.
+This option cannot be used with @option{--link}.
 
 @item -p STR
 @itemx --prefix=STR
-Prefix to append before the night-identifier of each newly created link or
-copy. This option is thus only relevant with the @option{--copy} or
-@option{--link} options. See the description of @option{--link} for how its
-used. For example, with @option{--prefix=img-}, all the created file names
-in the current directory will start with @code{img-}, making outputs like
-@file{img-n1-1.fits} or @file{img-n3-42.fits}.
-
-@option{--prefix} can also be used to store the links/copies in another
-directory relative to the directory this script is being run (it must
-already exist). For example @code{--prefix=/path/to/processing/img-} will
-put all the links/copies in the @file{/path/to/processing} directory, and
-the files (in that directory) will all start with @file{img-}.
+Prefix to append before the night-identifier of each newly created link or 
copy.
+This option is thus only relevant with the @option{--copy} or @option{--link} 
options.
+See the description of @option{--link} for how it is used.
+For example, with @option{--prefix=img-}, all the created file names in the 
current directory will start with @code{img-}, making outputs like 
@file{img-n1-1.fits} or @file{img-n3-42.fits}.
+
+@option{--prefix} can also be used to store the links/copies in another directory relative to the directory this script is being run in (it must already exist).
+For example, @option{--prefix=/path/to/processing/img-} will put all the links/copies in the @file{/path/to/processing} directory, and the files (in that directory) will all start with @file{img-}.
 @end table
 
 
@@ -11079,24 +8238,18 @@ the files (in that directory) will all start with 
@file{img-}.
 @cindex Image format conversion
 @cindex Converting image formats
 @pindex @r{ConvertType (}astconvertt@r{)}
-The FITS format used in astronomy was defined mainly for archiving,
-transmission, and processing. In other situations, the data might be useful
-in other formats. For example, when you are writing a paper or report, or
-if you are making slides for a talk, you can't use a FITS image. Other
-image formats should be used. In other cases you might want your pixel
-values in a table format as plain text for input to other programs that
-don't recognize FITS. ConvertType is created for such situations. The
-various types will increase with future updates and based on need.
-
-The conversion is not only one way (from FITS to other formats), but two
-ways (except the EPS and PDF formats@footnote{Because EPS and PDF are
-vector, not raster/pixelated formats}). So you can also convert a JPEG
-image or text file into a FITS image. Basically, other than EPS/PDF, you
-can use any of the recognized formats as different color channel inputs to
-get any of the recognized outputs. So before explaining the options and
-arguments (in @ref{Invoking astconvertt}), we'll start with a short
-description of the recognized files types in @ref{Recognized file formats},
-followed a short introduction to digital color in @ref{Color}.
+The FITS format used in astronomy was defined mainly for archiving, transmission, and processing.
+In other situations, the data might be useful in other formats.
+For example, when you are writing a paper or report, or if you are making slides for a talk, you can't use a FITS image.
+Other image formats should be used.
+In other cases you might want your pixel values in a table format as plain text for input to other programs that don't recognize FITS.
+ConvertType was created for such situations.
+The supported formats will increase with future updates, based on need.
+
+The conversion is not only one way (from FITS to other formats), but two ways (except the EPS and PDF formats@footnote{Because EPS and PDF are vector, not raster/pixelated formats}).
+So you can also convert a JPEG image or text file into a FITS image.
+Basically, other than EPS/PDF, you can use any of the recognized formats as different color channel inputs to get any of the recognized outputs.
+So before explaining the options and arguments (in @ref{Invoking astconvertt}), we'll start with a short description of the recognized file formats in @ref{Recognized file formats}, followed by a short introduction to digital color in @ref{Color}.
 
 @menu
 * Recognized file formats::     Recognized file formats
@@ -11107,59 +8260,42 @@ followed a short introduction to digital color in 
@ref{Color}.
 @node Recognized file formats, Color, ConvertType, ConvertType
 @subsection Recognized file formats
 
-The various standards and the file name extensions recognized by
-ConvertType are listed below. Currently Gnuastro uses the file name's
-suffix to identify the format.
+The various standards and the file name extensions recognized by ConvertType 
are listed below.
+Currently Gnuastro uses the file name's suffix to identify the format.
 
 @table @asis
 @item FITS or IMH
 @cindex IRAF
 @cindex Astronomical data format
-Astronomical data are commonly stored in the FITS format (or the older data
-IRAF @file{.imh} format), a list of file name suffixes which indicate that
-the file is in this format is given in @ref{Arguments}.
+Astronomical data are commonly stored in the FITS format (or the older IRAF @file{.imh} format); a list of file name suffixes that indicate a file is in this format is given in @ref{Arguments}.
 
-Each image extension of a FITS file only has one value per
-pixel/element. Therefore, when used as input, each input FITS image
-contributes as one color channel. If you want multiple extensions in one
-FITS file for different color channels, you have to repeat the file name
-multiple times and use the @option{--hdu}, @option{--hdu2}, @option{--hdu3}
-or @option{--hdu4} options to specify the different extensions.
+Each image extension of a FITS file only has one value per pixel/element.
+Therefore, when used as input, each input FITS image contributes as one color 
channel.
+If you want multiple extensions in one FITS file for different color channels, 
you have to repeat the file name multiple times and use the @option{--hdu}, 
@option{--hdu2}, @option{--hdu3} or @option{--hdu4} options to specify the 
different extensions.
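+For example, a minimal sketch of using three extensions of one FITS file as three color channels (the file name @file{rgb.fits} and its extension numbers are hypothetical):
+
+@example
+## `rgb.fits' and its extension numbers are hypothetical.
+$ astconvertt rgb.fits rgb.fits rgb.fits --hdu=1 --hdu2=2 \
+              --hdu3=3 --output=color.jpg
+@end example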
 
 @item JPEG
 @cindex JPEG format
 @cindex Raster graphics
 @cindex Pixelated graphics
-The JPEG standard was created by the Joint photographic experts
-group. It is currently one of the most commonly used image
-formats. Its major advantage is the compression algorithm that is
-defined by the standard. Like the FITS standard, this is a raster
-graphics format, which means that it is pixelated.
-
-A JPEG file can have 1 (for gray-scale), 3 (for RGB) and 4 (for CMYK)
-color channels. If you only want to convert one JPEG image into other
-formats, there is no problem, however, if you want to use it in
-combination with other input files, make sure that the final number of
-color channels does not exceed four. If it does, then ConvertType will
-abort and notify you.
+The JPEG standard was created by the Joint Photographic Experts Group.
+It is currently one of the most commonly used image formats.
+Its major advantage is the compression algorithm that is defined by the standard.
+Like the FITS standard, this is a raster graphics format, which means that it is pixelated.
+
+A JPEG file can have 1 (for gray-scale), 3 (for RGB) and 4 (for CMYK) color channels.
+If you only want to convert one JPEG image into other formats, there is no problem; however, if you want to use it in combination with other input files, make sure that the final number of color channels does not exceed four.
+If it does, then ConvertType will abort and notify you.
 
 @cindex Suffixes, JPEG images
-The file name endings that are recognized as a JPEG file for input
-are: @file{.jpg}, @file{.JPG}, @file{.jpeg}, @file{.JPEG},
-@file{.jpe}, @file{.jif}, @file{.jfif} and @file{.jfi}.
+The file name endings that are recognized as a JPEG file for input are:
+@file{.jpg}, @file{.JPG}, @file{.jpeg}, @file{.JPEG}, @file{.jpe}, 
@file{.jif}, @file{.jfif} and @file{.jfi}.
 
 @item TIFF
 @cindex TIFF format
-TIFF (or Tagged Image File Format) was originally designed as a common
-format for scanners in the early 90s and since then it has grown to become
-very general. In many aspects, the TIFF standard is similar to the FITS
-image standard: it can allow data of many types (see @ref{Numeric data
-types}), and also allows multiple images to be stored in a single file
-(each image in the file is called a `directory' in the TIFF
-standard). However, unlike FITS, it can only store images, it has no
-constructs for tables. Another (inconvenient) difference with the FITS
-standard is that keyword names are stored as numbers, not human-readable
-text.
+TIFF (or Tagged Image File Format) was originally designed as a common format for scanners in the early 1990s and has since grown to become very general.
+In many aspects, the TIFF standard is similar to the FITS image standard: it allows data of many types (see @ref{Numeric data types}), and also allows multiple images to be stored in a single file (each image in the file is called a `directory' in the TIFF standard).
+However, unlike FITS, it can only store images; it has no constructs for tables.
+Another (inconvenient) difference with the FITS standard is that keyword names are stored as numbers, not human-readable text.
 
 However, outside of astronomy, because of its support of different numeric
 data types, many fields use TIFF images for accurate (for example 16-bit
@@ -11173,13 +8309,11 @@ writing TIFF images, please get in touch with us.
 @cindex PostScript
 @cindex Vector graphics
 @cindex Encapsulated PostScript
-The Encapsulated PostScript (EPS) format is essentially a one page
-PostScript file which has a specified size. PostScript also includes
-non-image data, for example lines and texts. It is a fully functional
-programming language to describe a document. Therefore in ConvertType,
-EPS is only an output format and cannot be used as input. Contrary to
-the FITS or JPEG formats, PostScript is not a raster format, but is
-categorized as vector graphics.
+The Encapsulated PostScript (EPS) format is essentially a one-page PostScript file which has a specified size.
+PostScript also includes non-image data, for example lines and text.
+It is a fully functional programming language to describe a document.
+Therefore in ConvertType, EPS is only an output format and cannot be used as input.
+Contrary to the FITS or JPEG formats, PostScript is not a raster format, but is categorized as vector graphics.
 
 @cindex PDF
 @cindex Adobe systems
@@ -11187,106 +8321,71 @@ categorized as vector graphics.
 @cindex Compiled PostScript
 @cindex Portable Document format
 @cindex Static document description format
-The Portable Document Format (PDF) is currently the most common format
-for documents. Some believe that PDF has replaced PostScript and that
-PostScript is now obsolete. This view is wrong, a PostScript file is
-an actual plain text file that can be edited like any program source
-with any text editor. To be able to display its programmed content or
-print, it needs to pass through a processor or compiler. A PDF file
-can be thought of as the processed output of the compiler on an input
-PostScript file. PostScript, EPS and PDF were created and are
-registered by Adobe Systems.
+The Portable Document Format (PDF) is currently the most common format for documents.
+Some believe that PDF has replaced PostScript and that PostScript is now obsolete.
+This view is wrong: a PostScript file is an actual plain text file that can be edited like any program source with any text editor.
+To display or print its programmed content, it needs to pass through a processor or compiler.
+A PDF file can be thought of as the processed output of the compiler on an input PostScript file.
+PostScript, EPS and PDF were created and are registered by Adobe Systems.
 
 @cindex @TeX{}
 @cindex @LaTeX{}
-With these features in mind, you can see that when you are compiling a
-document with @TeX{} or @LaTeX{}, using an EPS file is much more low
-level than a JPEG and thus you have much greater control and therefore
-quality. Since it also includes vector graphic lines we also use such
-lines to make a thin border around the image to make its appearance in
-the document much better. No matter the resolution of the display or
-printer, these lines will always be clear and not pixelated. In the
-future, addition of text might be included (for example labels or
-object IDs) on the EPS output. However, this can be done better with
-tools within @TeX{} or @LaTeX{} such as
-PGF/Tikz@footnote{@url{http://sourceforge.net/projects/pgf/}}.
+With these features in mind, you can see that when you are compiling a document with @TeX{} or @LaTeX{}, using an EPS file is much lower level than a JPEG, and thus you have much greater control and therefore quality.
+Since it also includes vector graphic lines, we use such lines to make a thin border around the image to make its appearance in the document much better.
+No matter the resolution of the display or printer, these lines will always be clear and not pixelated.
+In the future, the addition of text (for example labels or object IDs) on the EPS output might be supported.
+However, this can be done better with tools within @TeX{} or @LaTeX{} such as PGF/Tikz@footnote{@url{http://sourceforge.net/projects/pgf/}}.
 
 @cindex Binary image
 @cindex Saving binary image
 @cindex Black and white image
-If the final input image (possibly after all operations on the flux
-explained below) is a binary image or only has two colors of black and
-white (in segmentation maps for example), then PostScript has another
-great advantage compared to other formats. It allows for 1 bit pixels
-(pixels with a value of 0 or 1), this can decrease the output file
-size by 8 times. So if a gray-scale image is binary, ConvertType will
-exploit this property in the EPS and PDF (see below) outputs.
+If the final input image (possibly after all operations on the flux explained below) is a binary image or only has two colors of black and white (in segmentation maps for example), then PostScript has another great advantage compared to other formats.
+It allows for 1-bit pixels (pixels with a value of 0 or 1), which can decrease the output file size by a factor of 8.
+So if a gray-scale image is binary, ConvertType will exploit this property in the EPS and PDF (see below) outputs.
 
 @cindex Suffixes, EPS format
-The standard formats for an EPS file are @file{.eps}, @file{.EPS},
-@file{.epsf} and @file{.epsi}. The EPS outputs of ConvertType have the
-@file{.eps} suffix.
+The standard suffixes for an EPS file are @file{.eps}, @file{.EPS}, @file{.epsf} and @file{.epsi}.
+The EPS outputs of ConvertType have the @file{.eps} suffix.
 
 @item PDF
 @cindex Suffixes, PDF format
 @cindex GPL Ghostscript
-As explained above, a PDF document is a static document description
-format, viewing its result is therefore much faster and more efficient
-than PostScript. To create a PDF output, ConvertType will make a
-PostScript page description and convert that to PDF using GPL
-Ghostscript. The suffixes recognized for a PDF file are: @file{.pdf},
-@file{.PDF}. If GPL Ghostscript cannot be run on the PostScript file,
-it will remain and a warning will be printed.
+As explained above, a PDF document is a static document description format; viewing its result is therefore much faster and more efficient than PostScript.
+To create a PDF output, ConvertType will make a PostScript page description and convert that to PDF using GPL Ghostscript.
+The suffixes recognized for a PDF file are: @file{.pdf} and @file{.PDF}.
+If GPL Ghostscript cannot be run on the PostScript file, the PostScript file will remain and a warning will be printed.
 
 @item @option{blank}
 @cindex @file{blank} color channel
-This is not actually a file type! But can be used to fill one color
-channel with a blank value. If this argument is given for any color
-channel, that channel will not be used in the output.
+This is not actually a file type! It can be used to fill one color channel with a blank value.
+If this argument is given for any color channel, that channel will not be used in the output.
 
 @item Plain text
 @cindex Plain text
 @cindex Suffixes, plain text
-Plain text files have the advantage that they can be viewed with any text
-editor or on the command-line. Most programs also support input as plain
-text files. As input, each plain text file is considered to contain one
-color channel.
-
-In ConvertType, the recognized extensions for plain text files are
-@file{.txt} and @file{.dat}. As described in @ref{Invoking astconvertt}, if
-you just give these extensions, (and not a full filename) as output, then
-automatic output will be preformed to determine the final output name (see
-@ref{Automatic output}). Besides these, when the format of a file cannot be
-recognized from its name, ConvertType will fall back to plain text mode. So
-you can use any name (even without an extension) for a plain text input or
-output. Just note that when the suffix is not recognized, automatic output
-will not be preformed.
-
-The basic input/output on plain text images is very similar to how tables
-are read/written as described in @ref{Gnuastro text table format}. Simply
-put, the restrictions are very loose, and there is a convention to define a
-name, units, data type (see @ref{Numeric data types}), and comments for the
-data in a commented line. The only difference is that as a table, a text
-file can contain many datasets (columns), but as a 2D image, it can only
-contain one dataset. As a result, only one information comment line is
-necessary for a 2D image, and instead of the starting `@code{# Column N}'
-(@code{N} is the column number), the information line for a 2D image must
-start with `@code{# Image 1}'. When ConvertType is asked to output to plain
-text file, this information comment line is written before the image pixel
-values.
+Plain text files have the advantage that they can be viewed with any text 
editor or on the command-line.
+Most programs also support input as plain text files.
+As input, each plain text file is considered to contain one color channel.
 
-When converting an image to plain text, consider the fact that if the image
-is large, the number of columns in each line will become very large,
-possibly making it very hard to open in some text editors.
+In ConvertType, the recognized extensions for plain text files are @file{.txt} and @file{.dat}.
+As described in @ref{Invoking astconvertt}, if you just give one of these extensions (and not a full filename) as output, then automatic output will be performed to determine the final output name (see @ref{Automatic output}).
+Besides these, when the format of a file cannot be recognized from its name, ConvertType will fall back to plain text mode.
+So you can use any name (even without an extension) for a plain text input or output.
+Just note that when the suffix is not recognized, automatic output will not be performed.
+
+The basic input/output on plain text images is very similar to how tables are 
read/written as described in @ref{Gnuastro text table format}.
+Simply put, the restrictions are very loose, and there is a convention to 
define a name, units, data type (see @ref{Numeric data types}), and comments 
for the data in a commented line.
+The only difference is that as a table, a text file can contain many datasets 
(columns), but as a 2D image, it can only contain one dataset.
+As a result, only one information comment line is necessary for a 2D image, 
and instead of the starting `@code{# Column N}' (@code{N} is the column 
number), the information line for a 2D image must start with `@code{# Image 1}'.
+When ConvertType is asked to output to a plain text file, this information comment line is written before the image pixel values.
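+For example, a minimal sketch of a hypothetical 2 by 2 plain-text image (the name, unit and type inside the information comment line are only illustrative; the exact convention is described in @ref{Gnuastro text table format}):
+
+@example
+## The name, unit and type below are hypothetical examples.
+# Image 1: MYIMAGE [counts,f32] A hypothetical 2 by 2 image.
+1.23   4.56
+7.89   0.12
+@end example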
+
+When converting an image to plain text, consider the fact that if the image is 
large, the number of columns in each line will become very large, possibly 
making it very hard to open in some text editors.
 
 @item Standard output (command-line)
-This is very similar to the plain text output, but instead of creating a
-file to keep the printed values, they are printed on the command line. This
-can be very useful when you want to redirect the results directly to
-another program in one command with no intermediate file. The only
-difference is that only the pixel values are printed (with no information
-comment line). To print to the standard output, set the output name to
-`@file{stdout}'.
+This is very similar to the plain text output, but instead of creating a file 
to keep the printed values, they are printed on the command line.
+This can be very useful when you want to redirect the results directly to 
another program in one command with no intermediate file.
+The only difference is that only the pixel values are printed (with no 
information comment line).
+To print to the standard output, set the output name to `@file{stdout}'.
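+For example, a minimal sketch (with a hypothetical single-HDU @file{image.fits}) that pipes the pixel values directly into AWK to sum them:
+
+@example
+## `image.fits' is hypothetical; AWK sums all pixel values.
+$ astconvertt image.fits --output=stdout \
+              | awk '{for(i=1;i<=NF;i++) s+=$i} END {print s}'
+@end example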
 
 @end table
 
@@ -11300,101 +8399,69 @@ comment line). To print to the standard output, set 
the output name to
 @cindex Pixels
 @cindex Colorspace
 @cindex Primary colors
-Color is defined by mixing various measurements/filters. In digital
-monitors or common digital cameras, colors are displayed/stored by mixing
-the three basic colors of red, green and blue (RGB) with various
-proportions. When printing on paper, standard printers use the cyan,
-magenta, yellow and key (CMYK, key=black) color space. In other words, for
-each displayed/printed pixel of a color image, the dataset/image has three
-or four values.
+Color is defined by mixing various measurements/filters.
+In digital monitors or common digital cameras, colors are displayed/stored by 
mixing the three basic colors of red, green and blue (RGB) with various 
proportions.
+When printing on paper, standard printers use the cyan, magenta, yellow and 
key (CMYK, key=black) color space.
+In other words, for each displayed/printed pixel of a color image, the 
dataset/image has three or four values.
 
 @cindex Color channel
 @cindex Channel, color
-To store/show the three values for each pixel, cameras and monitors
-allocate a certain fraction of each pixel's area to red, green and blue
-filters. These three filters are thus built into the hardware at the pixel
-level. However, because measurement accuracy is very important in
-scientific instruments, and we want to do measurements (take images) with
-various/custom filters (without having to order a new expensive detector!),
-scientific detectors use the full area of the pixel to store one value for
-it in a single/mono channel dataset. To make measurements in different
-filters, we just place a filter in the light path before the
-detector. Therefore, the FITS format that is used to store astronomical
-datasets is inherently a mono-channel format (see @ref{Recognized file
-formats} or @ref{Fits}).
-
-When a subject has been imaged in multiple filters, you can feed each
-different filter into the red, green and blue channels and obtain a colored
-visualization. In ConvertType, you can do this by giving each separate
-single-channel dataset (for example in the FITS image format) as an
-argument (in the proper order), then asking for the output in a format that
-supports multi-channel datasets (for example JPEG or PDF, see the examples
-in @ref{Invoking astconvertt}).
+To store/show the three values for each pixel, cameras and monitors allocate a 
certain fraction of each pixel's area to red, green and blue filters.
+These three filters are thus built into the hardware at the pixel level.
+However, because measurement accuracy is very important in scientific 
instruments, and we want to do measurements (take images) with various/custom 
filters (without having to order a new expensive detector!), scientific 
detectors use the full area of the pixel to store one value for it in a 
single/mono channel dataset.
+To make measurements in different filters, we just place a filter in the light 
path before the detector.
+Therefore, the FITS format that is used to store astronomical datasets is 
inherently a mono-channel format (see @ref{Recognized file formats} or 
@ref{Fits}).
+
+When a subject has been imaged in multiple filters, you can feed each 
different filter into the red, green and blue channels and obtain a colored 
visualization.
+In ConvertType, you can do this by giving each separate single-channel dataset 
(for example in the FITS image format) as an argument (in the proper order), 
then asking for the output in a format that supports multi-channel datasets 
(for example JPEG or PDF, see the examples in @ref{Invoking astconvertt}).
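+For example, a minimal sketch (with hypothetical file names, each a single-channel FITS image) that feeds three filters into the red, green and blue channels of a JPEG output:
+
+@example
+## File names are hypothetical; the order is red, green, blue.
+$ astconvertt f160w.fits f125w.fits f105w.fits --output=color.jpg
+@end example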
 
 @cindex Grayscale
 @cindex Visualization
 @cindex Colorspace, HSV
 @cindex Colorspace, gray-scale
 @cindex HSV: Hue Saturation Value
-As discussed above, color is not defined when a dataset/image contains a
-single value for each pixel. However, we interact with scientific datasets
-through monitors or printers (which allow multiple values per pixel and
-produce color with them). As a result, there is a lot of freedom in
-visualizing a single-channel dataset. The most basic is to use shades of
-black (because of its strong contrast with white). This scheme is called
-grayscale. To help in visualization, more complex mappings can be
-defined. For example, the values can be scaled to a range of 0 to 360 and
-used as the ``Hue'' term of the
-@url{https://en.wikipedia.org/wiki/HSL_and_HSV, Hue-Saturation-Value} (HSV)
-color space (while fixing the ``Saturation'' and ``Value'' terms). In
-ConvertType, you can use the @option{--colormap} option to choose between
-different mappings of mono-channel inputs, see @ref{Invoking astconvertt}.
-
-Since grayscale is a commonly used mapping of single-valued datasets, we'll
-continue with a closer look at how it is stored. One way to represent a
-gray-scale image in different color spaces is to use the same proportions
-of the primary colors in each pixel. This is the common way most FITS image
-viewers work: for each pixel, they fill all the channels with the single
-value. While this is necessary for displaying a dataset, there are
-downsides when storing/saving this type of grayscale visualization (for
-example in a paper).
+As discussed above, color is not defined when a dataset/image contains a 
single value for each pixel.
+However, we interact with scientific datasets through monitors or printers 
(which allow multiple values per pixel and produce color with them).
+As a result, there is a lot of freedom in visualizing a single-channel dataset.
+The most basic is to use shades of black (because of its strong contrast with 
white).
+This scheme is called grayscale.
+To help in visualization, more complex mappings can be defined.
+For example, the values can be scaled to a range of 0 to 360 and used as the 
``Hue'' term of the @url{https://en.wikipedia.org/wiki/HSL_and_HSV, 
Hue-Saturation-Value} (HSV) color space (while fixing the ``Saturation'' and 
``Value'' terms).
+In ConvertType, you can use the @option{--colormap} option to choose between 
different mappings of mono-channel inputs, see @ref{Invoking astconvertt}.
+
+Since grayscale is a commonly used mapping of single-valued datasets, we'll 
continue with a closer look at how it is stored.
+One way to represent a gray-scale image in different color spaces is to use 
the same proportions of the primary colors in each pixel.
+This is the common way most FITS image viewers work: for each pixel, they fill 
all the channels with the single value.
+While this is necessary for displaying a dataset, there are downsides when 
storing/saving this type of grayscale visualization (for example in a paper).
 
 @itemize
 
 @item
-Three (for RGB) or four (for CMYK) values have to be stored for every
-pixel, this makes the output file very heavy (in terms of bytes).
+Three (for RGB) or four (for CMYK) values have to be stored for every pixel; this makes the output file very heavy (in terms of bytes).
 
 @item
-If printing, the printing errors of each color channel can make the
-printed image slightly more blurred than it actually is.
+If printing, the printing errors of each color channel can make the printed 
image slightly more blurred than it actually is.
 
 @end itemize
 
 @cindex PNG standard
 @cindex Single channel CMYK
-To solve both these problems when storing grayscale visualization, the best
-way is to save a single-channel dataset into the black channel of the CMYK
-color space. The JPEG standard is the only common standard that accepts
-CMYK color space.
-
-The JPEG and EPS standards set two sizes for the number of bits in each
-channel: 8-bit and 12-bit. The former is by far the most common and is what
-is used in ConvertType. Therefore, each channel should have values between
-0 to @math{2^8-1=255}. From this we see how each pixel in a gray-scale
-image is one byte (8 bits) long, in an RGB image, it is 3 bytes long and in
-CMYK it is 4 bytes long. But thanks to the JPEG compression algorithms,
-when all the pixels of one channel have the same value, that channel is
-compressed to one pixel. Therefore a Grayscale image and a CMYK image that
-has only the K-channel filled are approximately the same file size.
+To solve both these problems when storing grayscale visualization, the best 
way is to save a single-channel dataset into the black channel of the CMYK 
color space.
+The JPEG standard is the only common standard that accepts CMYK color space.
+
+The JPEG and EPS standards set two sizes for the number of bits in each channel: 8-bit and 12-bit.
+The former is by far the most common and is what is used in ConvertType.
+Therefore, each channel should have values between 0 and @mymath{2^8-1=255}.
+From this we see how each pixel in a gray-scale image is one byte (8 bits) long, in an RGB image it is 3 bytes long, and in CMYK it is 4 bytes long.
+But thanks to the JPEG compression algorithms, when all the pixels of one channel have the same value, that channel is compressed to one pixel.
+Therefore a grayscale image and a CMYK image that has only the K-channel filled are approximately the same file size.
 
 
 @node Invoking astconvertt,  , Color, ConvertType
 @subsection Invoking ConvertType
 
-ConvertType will convert any recognized input file type to any specified
-output type. The executable name is @file{astconvertt} with the following
-general template
+ConvertType will convert any recognized input file type to any specified 
output type.
+The executable name is @file{astconvertt} with the following general template
 
 @example
 $ astconvertt [OPTION...] InputFile [InputFile2] ... [InputFile4]
@@ -11426,65 +8493,43 @@ $ cat 2darray.txt | astconvertt -oimg.fits
 @end example
 
 @noindent
-The output's file format will be interpreted from the value given to the
-@option{--output} option. It can either be given on the command-line or in
-any of the configuration files (see @ref{Configuration files}). Note that
-if the output suffix is not recognized, it will default to plain text
-format, see @ref{Recognized file formats}.
+The output's file format will be interpreted from the value given to the 
@option{--output} option.
+It can either be given on the command-line or in any of the configuration 
files (see @ref{Configuration files}).
+Note that if the output suffix is not recognized, it will default to plain 
text format, see @ref{Recognized file formats}.
 
 @cindex Standard input
-At most four input files (one for each color channel for formats that allow
-it) are allowed in ConvertType. The first input dataset can either be a
-file or come from Standard input (see @ref{Standard input}). The order of
-multiple input files is important. After reading the input file(s) the
-number of color channels in all the inputs will be used to define which
-color space to use for the outputs and how each color channel is
-interpreted.
-
-Some formats can allow more than one color channel (for example in the JPEG
-format, see @ref{Recognized file formats}). If there is one input dataset
-(color channel) the output will be gray-scale, if three input datasets
-(color channels) are given, they are respectively considered to be the red,
-green and blue color channels. Finally, if there are four color channels
-they will be be cyan, magenta, yellow and black (CMYK colors).
-
-The value to @option{--output} (or @option{-o}) can be either a full file
-name or just the suffix of the desired output format. In the former case,
-it will used for the output. In the latter case, the name of the output
-file will be set based on the automatic output guidelines, see
-@ref{Automatic output}. Note that the suffix name can optionally start a
-@file{.} (dot), so for example @option{--output=.jpg} and
-@option{--output=jpg} are equivalent. See @ref{Recognized file formats}
-
-Besides the common set of options explained in @ref{Common options},
-the options to ConvertType can be classified into input, output and
-flux related options. The majority of the options are to do with the
-flux range. Astronomical data usually have a very large dynamic range
-(difference between maximum and minimum value) and different subjects
-might be better demonstrated with a limited flux range.
+At most four input files (one for each color channel for formats that allow 
it) are allowed in ConvertType.
+The first input dataset can either be a file or come from Standard input (see 
@ref{Standard input}).
+The order of multiple input files is important.
+After reading the input file(s) the number of color channels in all the inputs 
will be used to define which color space to use for the outputs and how each 
color channel is interpreted.
+
+Some formats can allow more than one color channel (for example the JPEG format, see @ref{Recognized file formats}).
+If there is one input dataset (color channel) the output will be gray-scale; if three input datasets (color channels) are given, they are respectively considered to be the red, green and blue color channels.
+Finally, if there are four color channels they will be cyan, magenta, yellow and black (CMYK colors).
+
+The value to @option{--output} (or @option{-o}) can be either a full file name or just the suffix of the desired output format.
+In the former case, it will be used for the output.
+In the latter case, the name of the output file will be set based on the automatic output guidelines, see @ref{Automatic output}.
+Note that the suffix name can optionally start with a @file{.} (dot), so for example @option{--output=.jpg} and @option{--output=jpg} are equivalent.
+See @ref{Recognized file formats}.
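+For example, a minimal sketch giving only the suffix (the input name is hypothetical; the final output name is set by the rules of @ref{Automatic output}):
+
+@example
+## `image.fits' is hypothetical.
+$ astconvertt image.fits --output=jpg
+@end example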
+
+Besides the common set of options explained in @ref{Common options}, the options to ConvertType can be classified into input, output and flux related options.
+The majority of the options have to do with the flux range.
+Astronomical data usually have a very large dynamic range (difference between maximum and minimum value) and different subjects might be better demonstrated with a limited flux range.
 
 @noindent
 Input:
 @table @option
 @item -h STR/INT
 @itemx --hdu=STR/INT
-In ConvertType, it is possible to call the HDU option multiple times for
-the different input FITS or TIFF files in the same order that they are
-called on the command-line. Note that in the TIFF standard, one `directory'
-(similar to a FITS HDU) may contain multiple color channels (for example
-when the image is in RGB).
-
-Except for the fact that multiple calls are possible, this option is
-identical to the common @option{--hdu} in @ref{Input output options}. The
-number of calls to this option cannot be less than the number of input FITS
-or TIFF files, but if there are more, the extra HDUs will be ignored, note
-that they will be read in the order described in @ref{Configuration file
-precedence}.
-
-Unlike CFITSIO, libtiff (which is used to read TIFF files) only recognizes
-numbers (counting from zero, similar to CFITSIO) for `directory'
-identification. Hence the concept of names is not defined for the
-directories and the values to this option for TIFF files must be numbers.
+In ConvertType, it is possible to call the HDU option multiple times for the 
different input FITS or TIFF files in the same order that they are called on 
the command-line.
+Note that in the TIFF standard, one `directory' (similar to a FITS HDU) may 
contain multiple color channels (for example when the image is in RGB).
+
+Except for the fact that multiple calls are possible, this option is identical to the common @option{--hdu} in @ref{Input output options}.
+The number of calls to this option cannot be less than the number of input FITS or TIFF files, but if there are more, the extra HDUs will be ignored; note that they will be read in the order described in @ref{Configuration file precedence}.
+
+Unlike CFITSIO, libtiff (which is used to read TIFF files) only recognizes 
numbers (counting from zero, similar to CFITSIO) for `directory' identification.
+Hence the concept of names is not defined for the directories and the values 
to this option for TIFF files must be numbers.
 @end table
 
 @noindent
@@ -11493,117 +8538,88 @@ Output:
 
 @item -w FLT
 @itemx --widthincm=FLT
-The width of the output in centimeters. This is only relevant for those
-formats that accept such a width (not plain text for example). For most
-digital purposes, the number of pixels is far more important than the value
-to this parameter because you can adjust the absolute width (in inches or
-centimeters) in your document preparation program.
+The width of the output in centimeters.
+This is only relevant for those formats that accept such a width (not plain 
text for example).
+For most digital purposes, the number of pixels is far more important than the 
value to this parameter because you can adjust the absolute width (in inches or 
centimeters) in your document preparation program.
 
 @item -b INT
 @itemx --borderwidth=INT
 @cindex Border on an image
-The width of the border to be put around the EPS and PDF outputs in units
-of PostScript points. There are 72 or 28.35 PostScript points in an inch or
-centimeter respectively. In other words, there are roughly 3 PostScript
-points in every millimeter. If you are planning on adding a border, its
-significance is highly correlated with the value you give to the
-@option{--widthincm} parameter.
-
-Unfortunately in the document structuring convention of the PostScript
-language, the ``bounding box'' has to be in units of PostScript points
-with no fractions allowed. So the border values only have to be
-specified in integers. To have a final border that is thinner than one
-PostScript point in your document, you can ask for a larger width in
-ConvertType and then scale down the output EPS or PDF file in your
-document preparation program. For example by setting @command{width}
-in your @command{includegraphics} command in @TeX{} or @LaTeX{}. Since
-it is vector graphics, the changes of size have no effect on the
-quality of your output quality (pixels don't get different values).
+The width of the border to be put around the EPS and PDF outputs in units of PostScript points.
+There are 72 or 28.35 PostScript points in an inch or centimeter respectively.
+In other words, there are roughly 3 PostScript points in every millimeter.
+If you are planning on adding a border, its significance is highly correlated with the value you give to the @option{--widthincm} parameter.
+
+Unfortunately in the document structuring convention of the PostScript language, the ``bounding box'' has to be in units of PostScript points with no fractions allowed.
+So the border width can only be specified as an integer.
+To have a final border that is thinner than one PostScript point in your document, you can ask for a larger width in ConvertType and then scale down the output EPS or PDF file in your document preparation program, for example by setting the @command{width} option of the @command{includegraphics} command in @TeX{} or @LaTeX{}.
+Since it is vector graphics, changes of size have no effect on the quality of your output (pixels don't get different values).
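+For example, a minimal sketch (hypothetical input and border width) asking for a border on a PDF output that you can later scale down in your document:
+
+@example
+## `image.fits' is hypothetical; 2 points is roughly 0.7mm.
+$ astconvertt image.fits --borderwidth=2 --output=pdf
+@end example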
 
 @item -x
 @itemx --hex
 @cindex ASCII85 encoding
 @cindex Hexadecimal encoding
-Use Hexadecimal encoding in creating EPS output. By default the ASCII85
-encoding is used which provides a much better compression ratio. When
-converted to PDF (or included in @TeX{} or @LaTeX{} which is finally saved
-as a PDF file), an efficient binary encoding is used which is far more
-efficient than both of them. The choice of EPS encoding will thus have no
-effect on the final PDF.
-
-So if you want to transfer your EPS files (for example if you want to
-submit your paper to arXiv or journals in PostScript), their storage
-might become important if you have large images or lots of small
-ones. By default ASCII85 encoding is used which offers a much better
-compression ratio (nearly 40 percent) compared to Hexadecimal encoding.
+Use Hexadecimal encoding in creating the EPS output.
+By default, the ASCII85 encoding is used, which offers a much better compression ratio (nearly 40 percent) compared to Hexadecimal encoding.
+When converted to PDF (or included in @TeX{} or @LaTeX{}, which is finally saved as a PDF file), an efficient binary encoding is used, which is far more efficient than both of them.
+The choice of EPS encoding will thus have no effect on the final PDF.
+
+So if you want to transfer your EPS files (for example if you want to submit your paper to arXiv or journals in PostScript), their storage might become important if you have large images or lots of small ones; in that case, the default ASCII85 encoding is preferable.
 
 @item -u INT
 @itemx --quality=INT
 @cindex JPEG compression quality
 @cindex Compression quality in JPEG
 @cindex Quality of compression in JPEG
-The quality (compression) of the output JPEG file with values from 0 to 100
-(inclusive). For other formats the value to this option is ignored. Note
-that only in gray-scale (when one input color channel is given) will this
-actually be the exact quality (each pixel will correspond to one input
-value). If it is in color mode, some degradation will occur. While the JPEG
-standard does support loss-less graphics, it is not commonly supported.
+The quality (compression) of the output JPEG file with values from 0 to 100 (inclusive).
+For other formats the value to this option is ignored.
+Note that only in gray-scale (when one input color channel is given) will this actually be the exact quality (each pixel will correspond to one input value).
+If it is in color mode, some degradation will occur.
+While the JPEG standard does support lossless graphics, this feature is not commonly supported.
 
 @item --colormap=STR[,FLT,...]
-The color map to visualize a single channel. The first value given to this
-option is the name of the color map, which is shown below. Some color maps
-can be configured. In this case, the configuration parameters are
-optionally given as numbers following the name of the color map for example
-see @option{hsv}. The table below contains the usable names of the color
-maps that are currently supported:
+The color map to visualize a single channel.
+The first value given to this option is the name of the color map, which is shown below.
+Some color maps can be configured.
+In this case, the configuration parameters are optionally given as numbers following the name of the color map; for example, see @option{hsv}.
+The table below contains the usable names of the color maps that are currently supported:
 
 @table @option
 @item gray
 @itemx grey
 @cindex Colorspace, gray-scale
-Grayscale color map. This color map doesn't have any parameters. The full
-dataset range will be scaled to 0 and @mymath{2^8-1=255} to be stored in
-the requested format.
+Grayscale color map.
+This color map doesn't have any parameters.
+The full dataset range will be scaled to the range 0 to @mymath{2^8-1=255} to be stored in the requested format.
 
 @item hsv
 @cindex Colorspace, HSV
 @cindex Hue, saturation, value
 @cindex HSV: Hue Saturation Value
-Hue, Saturation,
-Value@footnote{@url{https://en.wikipedia.org/wiki/HSL_and_HSV}} color
-map. If no values are given after the name (@option{--colormap=hsv}), the
-dataset will be scaled to 0 and 360 for hue covering the full spectrum of
-colors. However, you can limit the range of hue (to show only a special
-color range) by explicitly requesting them after the name (for example
-@option{--colormap=hsv,20,240}).
-
-The mapping of a single-channel dataset to HSV is done through the Hue and
-Value elements: Lower dataset elements have lower ``value'' @emph{and}
-lower ``hue''. This creates darker colors for fainter parts, while also
-respecting the range of colors.
+Hue, Saturation, Value@footnote{@url{https://en.wikipedia.org/wiki/HSL_and_HSV}} color map.
+If no values are given after the name (@option{--colormap=hsv}), the dataset will be scaled to the range 0 to 360 in hue, covering the full spectrum of colors.
+However, you can limit the range of hue (to show only a special color range) by explicitly giving the limits after the name (for example @option{--colormap=hsv,20,240}).
+
+The mapping of a single-channel dataset to HSV is done through the Hue and Value elements: lower dataset elements have lower ``value'' @emph{and} lower ``hue''.
+This creates darker colors for fainter parts, while also respecting the range of colors.
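+For example, a minimal sketch (hypothetical input) that maps the dataset onto a limited hue range in a JPEG output:
+
+@example
+## `image.fits' is hypothetical; hue is limited to 20--240.
+$ astconvertt image.fits --colormap=hsv,20,240 --output=jpg
+@end example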
 
 @item sls
 @cindex DS9
 @cindex SAO DS9
 @cindex SLS Color
 @cindex Colorspace: SLS
-The SLS color range, taken from the commonly used
-@url{http://ds9.si.edu,SAO DS9}. The advantage of this color range is that
-it ranges from black to dark blue, and finishes with red and white. So
-unlike the HSV color range, it includes black and white and brighter colors
-(like yellow, red and white) show the larger values.
+The SLS color range, taken from the commonly used @url{http://ds9.si.edu,SAO DS9}.
+The advantage of this color range is that it starts from black, goes through dark blue, and finishes with red and white.
+So unlike the HSV color range, it includes black and white, and brighter colors (like yellow, red and white) show the larger values.
 @end table
 
 @item --rgbtohsv
-When there are three input channels and the output is in the FITS format,
-interpret the three input channels as red, green and blue channels (RGB)
-and convert them to the hue, saturation, value (HSV) color space.
+When there are three input channels and the output is in the FITS format, 
interpret the three input channels as red, green and blue channels (RGB) and 
convert them to the hue, saturation, value (HSV) color space.
 
-The currently supported output formats of ConvertType don't have native
-support for HSV. Therefore this option is only supported when the output is
-in FITS format and each of the hue, saturation and value arrays can be
-saved as one FITS extension in the output for further analysis (for example
-to select a certain color).
+The currently supported output formats of ConvertType don't have native support for HSV.
+Therefore this option is only supported when the output is in FITS format, where each of the hue, saturation and value arrays is saved as one FITS extension in the output for further analysis (for example to select a certain color).
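+For example, a minimal sketch (hypothetical file names, one per channel, in red-green-blue order) that saves the HSV arrays into one FITS file:
+
+@example
+## Input files (red, green, blue channels) are hypothetical.
+$ astconvertt r.fits g.fits b.fits --rgbtohsv --output=hsv.fits
+@end example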
 
 @end table
 
@@ -11615,22 +8631,15 @@ Flux range:
 @item -c STR
 @itemx --change=STR
 @cindex Change converted pixel values
-(@option{=STR}) Change pixel values with the following format
-@option{"from1:to1, from2:to2,..."}. This option is very useful in
-displaying labeled pixels (not actual data images which have noise)
-like segmentation maps. In labeled images, usually a group of pixels
-have a fixed integer value. With this option, you can manipulate the
-labels before the image is displayed to get a better output for print
-or to emphasize on a particular set of labels and ignore the rest. The
-labels in the images will be changed in the same order given. By
-default first the pixel values will be converted then the pixel values
-will be truncated (see @option{--fluxlow} and @option{--fluxhigh}).
-
-You can use any number for the values irrespective of your final
-output, your given values are stored and used in the double precision
-floating point format. So for example if your input image has labels
-from 1 to 20000 and you only want to display those with labels 957 and
-11342 then you can run ConvertType with these options:
+(@option{=STR}) Change pixel values with the following format @option{"from1:to1, from2:to2,..."}.
+This option is very useful in displaying labeled pixels (not actual data images which have noise) like segmentation maps.
+In labeled images, usually a group of pixels have a fixed integer value.
+With this option, you can manipulate the labels before the image is displayed to get a better output for print, or to emphasize a particular set of labels and ignore the rest.
+The labels in the images will be changed in the same order given.
+By default, the pixel values are first converted and then truncated (see @option{--fluxlow} and @option{--fluxhigh}).
+
+You can use any number for the values irrespective of your final output; your given values are stored and used in the double precision floating point format.
+So for example, if your input image has labels from 1 to 20000 and you only want to display those with labels 957 and 11342, then you can run ConvertType with these options:
 
 @example
 $ astconvertt --change=957:50000,11342:50001 --fluxlow=5e4 \
@@ -11638,24 +8647,19 @@ $ astconvertt --change=957:50000,11342:50001 
--fluxlow=5e4 \
 @end example
 
 @noindent
-While the output JPEG format is only 8 bit, this operation is done in
-an intermediate step which is stored in double precision floating
-point. The pixel values are converted to 8-bit after all operations on
-the input fluxes have been complete. By placing the value in double
-quotes you can use as many spaces as you like for better readability.
+While the output JPEG format is only 8-bit, this operation is done in an intermediate step which is stored in double precision floating point.
+The pixel values are converted to 8-bit after all operations on the input fluxes have been completed.
+By placing the value in double quotes you can use as many spaces as you like for better readability.
 
 @item -C
 @itemx --changeaftertrunc
-Change pixel values (with @option{--change}) after truncation of the
-flux values, by default it is the opposite.
+Change pixel values (with @option{--change}) after truncation of the flux values; by default it is the opposite.
 
 @item -L FLT
 @itemx --fluxlow=FLT
-The minimum flux (pixel value) to display in the output image, any pixel
-value below this value will be set to this value in the output. If the
-value to this option is the same as @option{--fluxhigh}, then no flux
-truncation will be applied. Note that when multiple channels are given,
-this value is used for all the color channels.
+The minimum flux (pixel value) to display in the output image; any pixel value below this value will be set to this value in the output.
+If the value to this option is the same as @option{--fluxhigh}, then no flux truncation will be applied.
+Note that when multiple channels are given, this value is used for all the color channels.
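+For example, a minimal sketch (hypothetical input and flux limits) that truncates the displayed flux range before conversion:
+
+@example
+## File name and flux limits are hypothetical.
+$ astconvertt image.fits --fluxlow=0 --fluxhigh=100 --output=jpg
+@end example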
 
 @item -H FLT
 @itemx --fluxhigh=FLT
@@ -11664,33 +8668,24 @@ The maximum flux (pixel value) to display in the 
output image, see
 
 @item -m INT
 @itemx --maxbyte=INT
-This is only used for the JPEG and EPS output formats which have an 8-bit
-space for each channel of each pixel. The maximum value in each pixel can
-therefore be @mymath{2^8-1=255}. With this option you can change (decrease)
-the maximum value. By doing so you will decrease the dynamic range. It can
-be useful if you plan to use those values for other purposes.
+This is only used for the JPEG and EPS output formats which have an 8-bit 
space for each channel of each pixel.
+The maximum value in each pixel can therefore be @mymath{2^8-1=255}.
+With this option you can change (decrease) the maximum value.
+By doing so you will decrease the dynamic range.
+It can be useful if you plan to use those values for other purposes.
 
 @item -A INT
 @itemx --forcemin=INT
-Enforce the value of @option{--fluxlow} (when its given), even if its
-smaller than the minimum of the dataset and the output is format supporting
-color. This is particularly useful when you are converting a number of
-images to a common image format like JPEG or PDF with a single command and
-want them all to have the same range of colors, independent of the contents
-of the dataset. Note that if the minimum value is smaller than
-@option{--fluxlow}, then this option is redundant.
+Enforce the value of @option{--fluxlow} (when it is given), even if it is smaller than the minimum of the dataset and the output is a format supporting color.
+This is particularly useful when you are converting a number of images to a common image format like JPEG or PDF with a single command and want them all to have the same range of colors, independent of the contents of the dataset.
+Note that if the minimum value is smaller than @option{--fluxlow}, then this option is redundant.
 
 @cindex PDF
 @cindex EPS
 @cindex PostScript
-By default, when the dataset only has two values, @emph{and} the output
-format is PDF or EPS, ConvertType will use the PostScript optimization that
-allows setting the pixel values per bit, not byte (@ref{Recognized file
-formats}). This can greatly help reduce the file size. However, when
-@option{--fluxlow} or @option{--fluxhigh} are called, this optimization is
-disabeled: even though there are only two values (is binary), the
-difference between them does not correspond to the full contrast of black
-and white.
+By default, when the dataset only has two values, @emph{and} the output format is PDF or EPS, ConvertType will use the PostScript optimization that allows setting the pixel values per bit, not byte (@ref{Recognized file formats}).
+This can greatly help reduce the file size.
+However, when @option{--fluxlow} or @option{--fluxhigh} are called, this optimization is disabled: even though there are only two values (the image is binary), the difference between them does not correspond to the full contrast of black and white.
 
 @item -B INT
 @itemx --forcemax=INT
@@ -11699,67 +8694,43 @@ Similar to @option{--forcemin}, but for the maximum.
 
 @item -i
 @itemx --invert
-For 8-bit output types (JPEG, EPS, and PDF for example) the final value
-that is stored is inverted so white becomes black and vice versa. The
-reason for this is that astronomical images usually have a very large area
-of blank sky in them. The result will be that a large are of the image will
-be black. Note that this behavior is ideal for gray-scale images, if you
-want a color image, the colors are going to be mixed up.
+For 8-bit output types (JPEG, EPS, and PDF for example) the final value that is stored is inverted, so white becomes black and vice versa.
+The reason for this is that astronomical images usually have a very large area of blank sky in them.
+The result will be that a large area of the image will be black.
+Note that this behavior is ideal for gray-scale images; if you want a color image, the colors are going to be mixed up.
 
 @end table
 
 @node Table,  , ConvertType, Data containers
 @section Table
 
-Tables are the products of processing astronomical images and spectra. For
-example in Gnuastro, MakeCatalog will process the defined pixels over an
-object and produce a catalog (see @ref{MakeCatalog}). For each identified
-object, MakeCatalog can print its position on the image or sky, its total
-brightness and many other information that is deducible from the given
-image. Each one of these properties is a column in its output catalog (or
-table) and for each input object, we have a row.
-
-When there are only a small number of objects (rows) and not too many
-properties (columns), then a simple plain text file is mainly enough to
-store, transfer, or even use the produced data. However, to be more
-efficient in all these aspects, astronomers have defined the FITS binary
-table standard to store data in a binary (0 and 1) format, not plain
-text. This can offer major advantages in all those aspects: the file size
-will be greatly reduced and the reading and writing will be faster (because
-the RAM and CPU also work in binary).
-
-The FITS standard also defines a standard for ASCII tables, where the data
-are stored in the human readable ASCII format, but within the FITS file
-structure. These are mainly useful for keeping ASCII data along with images
-and possibly binary data as multiple (conceptually related) extensions
-within a FITS file. The acceptable table formats are fully described in
-@ref{Tables}.
+Tables are the products of processing astronomical images and spectra.
+For example in Gnuastro, MakeCatalog will process the defined pixels over an object and produce a catalog (see @ref{MakeCatalog}).
+For each identified object, MakeCatalog can print its position on the image or sky, its total brightness and much other information that is deducible from the given image.
+Each one of these properties is a column in its output catalog (or table), and for each input object we have a row.
+
+When there are only a small number of objects (rows) and not too many properties (columns), then a simple plain text file is usually enough to store, transfer, or even use the produced data.
+However, to be more efficient in all these aspects, astronomers have defined the FITS binary table standard to store data in a binary (0 and 1) format, not plain text.
+This can offer major advantages in all those aspects: the file size will be greatly reduced and the reading and writing will be faster (because the RAM and CPU also work in binary).
+
+The FITS standard also defines a standard for ASCII tables, where the data are 
stored in the human readable ASCII format, but within the FITS file structure.
+These are mainly useful for keeping ASCII data along with images and possibly 
binary data as multiple (conceptually related) extensions within a FITS file.
+The acceptable table formats are fully described in @ref{Tables}.
 
 @cindex AWK
 @cindex GNU AWK
-Binary tables are not easily readable by human eyes. There is no
-fixed/unified standard on how the zero and ones should be interpreted. The
-Unix-like operating systems have flourished because of a simple fact:
-communication between the various tools is based on human readable
-characters@footnote{In ``The art of Unix programming'', Eric Raymond makes
-this suggestion to programmers: ``When you feel the urge to design a
-complex binary file format, or a complex binary application protocol, it is
-generally wise to lie down until the feeling passes.''. This is a great
-book and strongly recommended, give it a look if you want to truly enjoy
-your work/life in this environment.}. So while the FITS table standards are
-very beneficial for the tools that recognize them, they are hard to use in
-the vast majority of available software. This creates limitations for their
-generic use.
-
-`Table' is Gnuastro's solution to this problem. With Table, FITS tables
-(ASCII or binary) are directly accessible to the Unix-like operating
-systems power-users (those working the command-line or shell, see
-@ref{Command-line interface}). With Table, a FITS table (in binary or ASCII
-formats) is only one command away from AWK (or any other tool you want to
-use). Just like a plain text file that you read with the @command{cat}
-command. You can pipe the output of Table into any other tool for
-higher-level processing, see the examples in @ref{Invoking asttable} for
-some simple examples.
+Binary tables are not easily readable by human eyes.
+There is no fixed/unified standard on how the zeros and ones should be interpreted.
+Unix-like operating systems have flourished because of a simple fact: communication between the various tools is based on human readable characters@footnote{In ``The art of Unix programming'', Eric Raymond makes this suggestion to programmers: ``When you feel the urge to design a complex binary file format, or a complex binary application protocol, it is generally wise to lie down until the feeling passes.''.
+This is a great book and strongly recommended; give it a look if you want to truly enjoy your work/life in this environment.}.
+So while the FITS table standards are very beneficial for the tools that 
recognize them, they are hard to use in the vast majority of available software.
+This creates limitations for their generic use.
+
+`Table' is Gnuastro's solution to this problem.
+With Table, FITS tables (ASCII or binary) are directly accessible to power-users of Unix-like operating systems (those working on the command-line or shell, see @ref{Command-line interface}).
+With Table, a FITS table (in binary or ASCII format) is only one command away from AWK (or any other tool you want to use), just like a plain text file that you read with the @command{cat} command.
+You can pipe the output of Table into any other tool for higher-level processing; see @ref{Invoking asttable} for some simple examples.
 
 @menu
 * Column arithmetic::           How to do operations on table columns.
@@ -11769,86 +8740,57 @@ some simple examples.
 @node Column arithmetic, Invoking asttable, Table, Table
 @subsection Column arithmetic
 
-After reading the requested columns from the input table, you can also do
-operations/arithmetic on the columns and save the resulting values as new
-column(s) in the output table (possibly in between other requested
-columns). To enable column arithmetic, the first 6 characters of the value
-to @option{--column} (@code{-c}) should be the arithmetic activation word
-`@option{arith }' (note the space character in the end, after
-`@code{arith}').
-
-After the activation word, you can use the reverse polish notation to
-identify the operators and their operands, see @ref{Reverse polish
-notation}. Just note that white-space characters are used between the
-tokens of the arithmetic expression and that they are meaningful to the
-command-line environment. Therefore the whole expression (including the
-activation word) has to be quoted on the command-line or in a shell script
-(see the examples below).
-
-To identify a column you can directly use its name, or specify its number
-(counting from one, see @ref{Selecting table columns}). When you are giving
-a column number, it is necessary to prefix it with a @code{c} (otherwise it
-is not distinguishable from a constant number to use in the arithmetic
-operation).
-
-For example with the command below, the first two columns of
-@file{table.fits} will be printed along with a third column that is the
-result of multiplying the first column with @mymath{10^{10}} (for example
-to convert wavelength from Meters to Angstroms). Note how without the
-`@key{c}', it is not possible to distinguish between @key{1} as a
-column-counter, or as a constant number to use in the arithmetic operation.
+After reading the requested columns from the input table, you can also do 
operations/arithmetic on the columns and save the resulting values as new 
column(s) in the output table (possibly in between other requested columns).
+To enable column arithmetic, the first 6 characters of the value to @option{--column} (@code{-c}) should be the arithmetic activation word `@option{arith }' (note the space character at the end, after `@code{arith}').
+
+After the activation word, you can use the reverse polish notation to identify 
the operators and their operands, see @ref{Reverse polish notation}.
+Just note that white-space characters are used between the tokens of the 
arithmetic expression and that they are meaningful to the command-line 
environment.
+Therefore the whole expression (including the activation word) has to be 
quoted on the command-line or in a shell script (see the examples below).
+
+To identify a column you can directly use its name, or specify its number 
(counting from one, see @ref{Selecting table columns}).
+When you are giving a column number, it is necessary to prefix it with a 
@code{c} (otherwise it is not distinguishable from a constant number to use in 
the arithmetic operation).
+
+For example, with the command below, the first two columns of @file{table.fits} will be printed along with a third column that is the result of multiplying the first column by @mymath{10^{10}} (for example, to convert wavelength from meters to Angstroms).
+Note how without the `@key{c}', it is not possible to distinguish between @key{1} as a column-counter or as a constant number to use in the arithmetic operation.
 
 @example
 $ asttable table.fits -c1,2 -c"arith c1 1e10 x"
 @end example
 
-Alternatively, if the columns have meta-data and the first two are
-respectively called @code{AWAV} and @code{SPECTRUM}, the command above is
-equivalent to the command below. Note that the @key{c} is no longer
-necessary in this scenario.
+Alternatively, if the columns have meta-data and the first two are 
respectively called @code{AWAV} and @code{SPECTRUM}, the command above is 
equivalent to the command below.
+Note that the @key{c} is no longer necessary in this scenario.
 
 @example
 $ asttable table.fits -cAWAV,SPECTRUM -c"arith AWAV 1e10 x"
 @end example
 
-Comparison of the two commands above clearly shows why it is recommended to
-use column names instead of numbers. When the columns have clear names, the
-command/script actually becomes much more readable, describing the intent
-of the operation. It is also independent of the low-level table structure:
-for the second command, the position of the @code{AWAV} and @code{SPECTRUM}
-columns in @file{table.fits} is irrelevant.
+Comparison of the two commands above clearly shows why it is recommended to 
use column names instead of numbers.
+When the columns have clear names, the command/script actually becomes much 
more readable, describing the intent of the operation.
+It is also independent of the low-level table structure: for the second 
command, the position of the @code{AWAV} and @code{SPECTRUM} columns in 
@file{table.fits} is irrelevant.
 
-Finally, since the arithmetic expressions are a value to @option{--column},
-it doesn't necessarily have to be a separate option, so the commands above
-are also identical to the command below (note that this only has one
-@option{-c} option). Just be very careful with the quoting!
+Finally, since an arithmetic expression is a value to @option{--column}, it doesn't necessarily have to be in a separate option, so the commands above are also identical to the command below (note that this only has one @option{-c} option).
+Just be very careful with the quoting!
 
 @example
 $ asttable table.fits -cAWAV,SPECTRUM,"arith AWAV 1e10 x"
 @end example
 
-Almost all the arithmetic operators of @ref{Arithmetic operators} are also
-supported for column arithmetic in Table. In particular, the few that are
-not present in the Gnuastro library aren't yet supported. For a list of the
-Gnuastro library arithmetic operators, please see the macros starting with
-@code{GAL_ARITHMETIC_OP} and ending with the operator name in
-@ref{Arithmetic on datasets}. Besides the operators in @ref{Arithmetic
-operators}, several operators are only available in Table to use on table
-columns.
+Almost all the arithmetic operators of @ref{Arithmetic operators} are also supported for column arithmetic in Table.
+The only exceptions are the few operators that are not yet present in the Gnuastro library.
+For a list of the Gnuastro library arithmetic operators, please see the macros 
starting with @code{GAL_ARITHMETIC_OP} and ending with the operator name in 
@ref{Arithmetic on datasets}.
+Besides the operators in @ref{Arithmetic operators}, several operators are 
only available in Table to use on table columns.
 
 @cindex WCS: World Coordinate System
 @cindex World Coordinate System (WCS)
 @table @code
 @item wcstoimg
-Convert the given WCS positions to image/dataset coordinates based on the
-number of dimensions in the WCS structure of @option{--wcshdu}
-extension/HDU in @option{--wcsfile}. It will output the same number of
-columns. The first popped operand is the last FITS dimension.
+Convert the given WCS positions to image/dataset coordinates based on the 
number of dimensions in the WCS structure of @option{--wcshdu} extension/HDU in 
@option{--wcsfile}.
+It will output the same number of columns.
+The first popped operand is the last FITS dimension.
 
-For example the two commands below (which have the same output) will
-produce 5 columns. The first three columns are the input table's ID, RA and
-Dec columns. The fourth and fifth columns will be the pixel positions in
-@file{image.fits} that correspond to each RA and Dec.
+For example the two commands below (which have the same output) will produce 5 
columns.
+The first three columns are the input table's ID, RA and Dec columns.
+The fourth and fifth columns will be the pixel positions in @file{image.fits} 
that correspond to each RA and Dec.
 
 @example
 $ asttable table.fits -cID,RA,DEC,"arith RA DEC wcstoimg" \
@@ -11858,19 +8800,16 @@ $ asttable table.fits -cID,RA -cDEC \
 @end example
 
 @item imgtowcs
-Similar to @code{wcstoimg}, except that image/dataset coordinates are
-converted to WCS coordinates.
+Similar to @code{wcstoimg}, except that image/dataset coordinates are 
converted to WCS coordinates.
 @end table
 
 
 @node Invoking asttable,  , Column arithmetic, Table
 @subsection Invoking Table
 
-Table will read/write, select, convert, or show the information of the
-columns in FITS ASCII table, FITS binary table and plain text table files,
-see @ref{Tables}. Output columns can also be determined by number or
-regular expression matching of column names, units, or comments. The
-executable name is @file{asttable} with the following general template
+Table will read/write, select, convert, or show the information of the columns in FITS ASCII tables, FITS binary tables, and plain text table files, see @ref{Tables}.
+Output columns can also be determined by number or regular expression matching of column names, units, or comments.
+The executable name is @file{asttable} with the following general template
 
 @example
 $ asttable [OPTION...] InputFile
@@ -11909,125 +8848,115 @@ $ asttable bintab.fits --sort=3 -ooutput.txt
 
 ## Subtract the first column from the second in `cat.txt' (can also
 ## be a FITS table) and keep the third and fourth columns.
-$ awk cat.txt -c"arith c2 c1 -",3,4 -ocat.fits
-@end example
-
-Table's input dataset can be given either as a file or from Standard input
-(see @ref{Standard input}). In the absence of selected columns, all the
-input's columns and rows will be written to the output. If any output file
-is explicitly requested (with @option{--output}) the output table will be
-written in it. When no output file is explicitly requested the output table
-will be written to the standard output.
-
-If the specified output is a FITS file, the type of FITS table (binary or
-ASCII) will be determined from the @option{--tabletype} option. If the
-output is not a FITS file, it will be printed as a plain text table (with
-space characters between the columns). When the columns are accompanied by
-meta-data (like column name, units, or comments), this information will
-also printed in the plain text file before the table, as described in
-@ref{Gnuastro text table format}.
-
-For the full list of options common to all Gnuastro programs please see
-@ref{Common options}. Options can also be stored in directory, user or
-system-wide configuration files to avoid repeating on the command-line, see
-@ref{Configuration files}. Table does not follow Automatic output that is
-common in most Gnuastro programs, see @ref{Automatic output}. Thus, in the
-absence of an output file, the selected columns will be printed on the
-command-line with no column information, ready for redirecting to other
-tools like AWK or sort, similar to the examples above.
+$ asttable cat.txt -c"arith c2 c1 -",3,4 -ocat.fits
+@end example
+
+Table's input dataset can be given either as a file or from Standard input 
(see @ref{Standard input}).
+In the absence of selected columns, all the input's columns and rows will be 
written to the output.
+If an output file is explicitly requested (with @option{--output}), the output table will be written to it.
+When no output file is explicitly requested, the output table will be written to the standard output.
+
+If the specified output is a FITS file, the type of FITS table (binary or 
ASCII) will be determined from the @option{--tabletype} option.
+If the output is not a FITS file, it will be printed as a plain text table 
(with space characters between the columns).
+When the columns are accompanied by meta-data (like column name, units, or comments), this information will also be printed in the plain text file before the table, as described in @ref{Gnuastro text table format}.
+
+For the full list of options common to all Gnuastro programs please see 
@ref{Common options}.
+Options can also be stored in directory, user or system-wide configuration 
files to avoid repeating on the command-line, see @ref{Configuration files}.
+Table does not follow the automatic output behavior that is common to most Gnuastro programs, see @ref{Automatic output}.
+Thus, in the absence of an output file, the selected columns will be printed on the command-line with no column information, ready for redirecting to other tools like AWK or @command{sort}, similar to the examples above.
 
 @table @option
 
 @item -i
 @itemx --information
-Only print the column information in the specified table on the
-command-line and exit. Each column's information (number, name, units, data
-type, and comments) will be printed as a row on the command-line. Note that
-the FITS standard only requires the data type (see @ref{Numeric data
-types}), and in plain text tables, no meta-data/information is
-mandatory. Gnuastro has its own convention in the comments of a plain text
-table to store and transfer this information as described in @ref{Gnuastro
-text table format}.
-
-This option will take precedence over the @option{--column} option, so when
-it is called along with requested columns, the latter will be ignored. This
-can be useful if you forget the identifier of a column after you have
-already typed some on the command-line. You can simply add a @option{-i}
-and run Table to see the whole list and remember. Then you can use the
-shell history (with the up arrow key on the keyboard), and retrieve the
-last command with all the previously typed columns present, delete
-@option{-i} and add the identifier you had forgot.
+Only print the column information in the specified table on the command-line 
and exit.
+Each column's information (number, name, units, data type, and comments) will 
be printed as a row on the command-line.
+Note that the FITS standard only requires the data type (see @ref{Numeric data 
types}), and in plain text tables, no meta-data/information is mandatory.
+Gnuastro has its own convention in the comments of a plain text table to store 
and transfer this information as described in @ref{Gnuastro text table format}.
+
+This option will take precedence over the @option{--column} option, so when it 
is called along with requested columns, the latter will be ignored.
+This can be useful if you forget the identifier of a column after you have 
already typed some on the command-line.
+You can simply add a @option{-i} and run Table to see the whole list and 
remember.
+Then you can use the shell history (with the up arrow key on the keyboard) to retrieve the last command with all the previously typed columns present, delete @option{-i}, and add the identifier you had forgotten.
 
 @cindex AWK
 @cindex GNU AWK
 @item -c STR/INT
 @itemx --column=STR/INT
-Set the output columns either by specifying the column number, or name. For
-more on selecting columns, see @ref{Selecting table columns}. If a value of
-this option starts with `@code{arith }', this option will do the requested
-operations/arithmetic on the specified columns and output the result in
-that place (among other requested columns). For more on column arithmetic
-see @ref{Column arithmetic}.
-
-To ask for multiple columns this option can be used in two way: 1) multiple
-calls to this option, 2) using a comma between each column specifier in one
-call to this option. These different solutions may be mixed in one call to
-Table: for example, @option{-cRA,DEC -cMAG}, or @option{-cRA -cDEC -cMAG}
-are both equivalent to @option{-cRA -cDEC -cMAG}. The order of the output
-columns will be the same order given to the option or in the configuration
-files (see @ref{Configuration file precedence}).
-
-This option is not mandatory, if no specific columns are requested, all the
-input table columns are output. When this option is called multiple times,
-it is possible to output one column more than once.
+Set the output columns either by specifying the column number, or name.
+For more on selecting columns, see @ref{Selecting table columns}.
+If a value of this option starts with `@code{arith }', this option will do the 
requested operations/arithmetic on the specified columns and output the result 
in that place (among other requested columns).
+For more on column arithmetic see @ref{Column arithmetic}.
+
+To ask for multiple columns this option can be used in two ways: 1) multiple calls to this option, 2) using a comma between each column specifier in one call to this option.
+These different solutions may be mixed in one call to Table: for example, @option{-cRA,DEC -cMAG} and @option{-cRA -cDEC -cMAG} are both equivalent to @option{-cRA,DEC,MAG}.
+The order of the output columns will be the same order given to the option or 
in the configuration files (see @ref{Configuration file precedence}).
+
+This option is not mandatory; if no specific columns are requested, all the input table's columns are output.
+When this option is called multiple times, it is possible to output one column more than once.
 
 @item -w STR
 @itemx --wcsfile=STR
-FITS file that contains the WCS to be used in the @code{wcstoimg} and
-@code{imgtowcs} operators of @option{--column} (see above). The extension
-name/number within the FITS file can be specified with @option{--wcshdu}.
+FITS file that contains the WCS to be used in the @code{wcstoimg} and 
@code{imgtowcs} operators of @option{--column} (see above).
+The extension name/number within the FITS file can be specified with 
@option{--wcshdu}.
 
 @item -W STR
 @itemx --wcshdu=STR
-FITS extension/HDU that contains the WCS to be used in the @code{wcstoimg}
-and @code{imgtowcs} operators of @option{--column} (see above). The FITS
-file name can be specified with @option{--wcsfile}.
+FITS extension/HDU that contains the WCS to be used in the @code{wcstoimg} and 
@code{imgtowcs} operators of @option{--column} (see above).
+The FITS file name can be specified with @option{--wcsfile}.
 
 @item -O
 @itemx --colinfoinstdout
 @cindex Standard output
-Add column metadata when the output is printed in the standard
-output. Usually the standard output is used for a fast visual check or to
-pipe into other program for further processing. So by default meta-data
-aren't included.
+Add column metadata when the output is printed in the standard output.
+Usually the standard output is used for a fast visual check or to pipe into another program for further processing.
+So by default meta-data aren't included.
 
 @item -r STR,FLT:FLT
 @itemx --range=STR,FLT:FLT
-Only print the output rows that have a value within the given range in the
-@code{STR} column (can be a name or counter). For example with
-@code{--range=sn,5:20} the output's columns will only contain rows that
-have a value between 5 and 20 in the @code{sn} column (not case-sensitive).
+Only output rows that have a value within the given range in the @code{STR} column (can be a name or counter).
+Note that the range is only inclusive of the lower limit.
+For example, with @code{--range=sn,5:20} the output's columns will only contain rows that have a value in the @code{sn} column (not case-sensitive) that is greater than or equal to 5, and less than 20.
 
-This option can be called multiple times (different ranges for different
-columns) in one run of the Table program. This is very useful for selecting
-the final rows from multiple criteria/columns.
+This option can be called multiple times (different ranges for different 
columns) in one run of the Table program.
+This is very useful for selecting the final rows from multiple 
criteria/columns.
 
-The chosen column doesn't have to be in the output columns. This is good
-when you just want to select using one column's values, but don't need that
-column anymore afterwards.
+The chosen column doesn't have to be in the output columns.
+This is good when you just want to select using one column's values, but don't 
need that column anymore afterwards.
 
 For one example of using this option, see the example under
 @option{--sigclip-median} in @ref{Invoking aststatistics}.
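+
+As a quick sketch (with hypothetical column names), the call below keeps only the rows satisfying two simultaneous range criteria:
+
+@example
+## Rows with 5 <= sn < 20 and 0.1 <= z < 0.5.
+$ asttable cat.fits --range=sn,5:20 --range=z,0.1:0.5
+@end example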
 
-@item -s STR
-@item --sort=STR
-Sort the output rows based on the values in the @code{STR} column (can be a
-column name or number). By default the sort is done in ascending/increasing
-order, to sort in a descending order, use @option{--descending}.
+@item -e STR,INT/FLT,...
+@itemx --equal=STR,INT/FLT,...
+Only output rows that are equal to the given number(s) in the given column.
+The first argument is the column identifier (name or number, see 
@ref{Selecting table columns}), after that you can specify any number of values.
+For example @option{--equal=ID,5,6,8} will only print the rows that have a 
value of 5, 6, or 8 in the @code{ID} column.
+This option can also be called multiple times, so @option{--equal=ID,4,5 --equal=ID,6,7} has the same effect as @option{--equal=ID,4,5,6,7}.
 
-The chosen column doesn't have to be in the output columns. This is good
-when you just want to sort using one column's values, but don't need that
-column anymore afterwards.
+@cartouche
+@noindent
+@strong{Equality and floating point numbers:} Floating point numbers are only approximate values (see @ref{Numeric data types}).
+In this context, their equality depends on how the input table was originally stored (as a plain text table or as an ASCII/binary FITS table).
+If you want to select floating point numbers, it is strongly recommended to use the @option{--range} option and set a very small interval around your desired number; don't use @option{--equal} or @option{--notequal}.
+@end cartouche
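+
+Following that advice, a sketch of approximate floating-point selection (with a hypothetical wavelength column) would be:
+
+@example
+## Instead of --equal=AWAV,6562.8, use a tiny interval around it.
+$ asttable cat.fits --range=AWAV,6562.79:6562.81
+@end example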
+
+@item -n STR,INT/FLT,...
+@itemx --notequal=STR,INT/FLT,...
+Only output rows that are @emph{not} equal to the given number(s) in the given 
column.
+The first argument is the column identifier (name or number, see 
@ref{Selecting table columns}), after that you can specify any number of values.
+For example @option{--notequal=ID,5,6,8} will only print the rows where the @code{ID} column doesn't have a value of 5, 6, or 8.
+This option can also be called multiple times, so @option{--notequal=ID,4,5 --notequal=ID,6,7} has the same effect as @option{--notequal=ID,4,5,6,7}.
+
+Be very careful if you want to use the non-equality with floating point 
numbers, see the special note under @option{--equal} for more.
+
+@item -s STR
+@itemx --sort=STR
+Sort the output rows based on the values in the @code{STR} column (can be a column name or number).
+By default the sort is done in ascending/increasing order; to sort in descending order, use @option{--descending}.
+
+The chosen column doesn't have to be in the output columns.
+This is good when you just want to sort using one column's values, but don't 
need that column anymore afterwards.
 
 @item -d
 @itemx --descending
@@ -12035,22 +8964,18 @@ When called with @option{--sort}, rows will be sorted 
in descending order.
 
 @item -H INT
 @itemx --head=INT
-Only print the given number of rows from the @emph{top} of the final
-table. Note that this option only affects the @emph{output} table. For
-example if you use @option{--sort}, or @option{--range}, the printed rows
-are the first @emph{after} applying the sort sorting, or selecting a range
-of the full input.
+Only print the given number of rows from the @emph{top} of the final table.
+Note that this option only affects the @emph{output} table.
+For example if you use @option{--sort} or @option{--range}, the printed rows are the first ones @emph{after} applying the sort or range selection to the full input.
 
 @cindex GNU Coreutils
-If the given value to @option{--head} is 0, the output columns won't have
-any rows and if its larger than the number of rows in the input table, all
-the rows are printed (this option is effectively ignored). This behavior is
-taken from the @command{head} program in GNU Coreutils.
+If the given value to @option{--head} is 0, the output columns won't have any rows, and if it is larger than the number of rows in the input table, all the rows are printed (this option is effectively ignored).
+This behavior is taken from the @command{head} program in GNU Coreutils.
 
 @item -t INT
 @itemx --tail=INT
-Only print the given number of rows from the @emph{bottom} of the final
-table. See @option{--head} for more.
+Only print the given number of rows from the @emph{bottom} of the final table.
+See @option{--head} for more (a usage sketch follows this list).
 
 @end table
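+
+As the promised sketch (the column name is hypothetical), the options above can be combined to print the five rows with the smallest magnitudes:
+
+@example
+$ asttable cat.fits --sort=MAG --head=5
+@end example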
 
@@ -12073,11 +8998,9 @@ table. See @option{--head} for more.
 @node Data manipulation, Data analysis, Data containers, Top
 @chapter Data manipulation
 
-Images are one of the major formats of data that is used in astronomy. The
-functions in this chapter explain the GNU Astronomy Utilities which are
-provided for their manipulation. For example cropping out a part of a
-larger image or convolving the image with a given kernel or applying a
-transformation to it.
+Images are one of the major formats of data used in astronomy.
+The programs described in this chapter are provided for their manipulation.
+For example, cropping out a part of a larger image, convolving an image with a given kernel, or applying a transformation to it.
 
 @menu
 * Crop::                        Crop region(s) from a dataset.
@@ -12094,12 +9017,10 @@ transformation to it.
 @cindex Postage stamp images
 @cindex Large astronomical images
 @pindex @r{Crop (}astcrop@r{)}
-Astronomical images are often very large, filled with thousands of
-galaxies. It often happens that you only want a section of the image, or
-you have a catalog of sources and you want to visually analyze them in
-small postage stamps. Crop is made to do all these things. When more than
-one crop is required, Crop will divide the crops between multiple threads
-to significantly reduce the run time.
+Astronomical images are often very large, filled with thousands of galaxies.
+It often happens that you only want a section of the image, or you have a 
catalog of sources and you want to visually analyze them in small postage 
stamps.
+Crop is made to do all these things.
+When more than one crop is required, Crop will divide the crops between 
multiple threads to significantly reduce the run time.
 
 @cindex Mosaicing
 @cindex Image tiles
@@ -12107,29 +9028,19 @@ to significantly reduce the run time.
 @cindex COSMOS survey
 @cindex Imaging surveys
 @cindex Hubble Space Telescope (HST)
-Astronomical surveys are usually extremely large. So large in fact, that
-the whole survey will not fit into a reasonably sized file. Because of
-this, surveys usually cut the final image into separate tiles and store
-each tile in a file. For example the COSMOS survey's Hubble space
-telescope, ACS F814W image consists of 81 separate FITS images, with each
-one having a volume of 1.7 Giga bytes.
+Astronomical surveys are usually extremely large.
+So large in fact, that the whole survey will not fit into a reasonably sized 
file.
+Because of this, surveys usually cut the final image into separate tiles and 
store each tile in a file.
+For example, the COSMOS survey's Hubble Space Telescope ACS F814W image consists of 81 separate FITS images, each with a volume of 1.7 Gigabytes.
 
 @cindex Stitch multiple images
-Even though the tile sizes are chosen to be large enough that too many
-galaxies/targets don't fall on the edges of the tiles, inevitably some
-do. So when you simply crop the image of such targets from one tile, you
-will miss a large area of the surrounding sky (which is essential in
-estimating the noise). Therefore in its WCS mode, Crop will stitch parts of
-the tiles that are relevant for a target (with the given width) from all
-the input images that cover that region into the output. Of course, the
-tiles have to be present in the list of input files.
-
-Besides cropping postage stamps around certain coordinates, Crop can also
-crop arbitrary polygons from an image (or a set of tiles by stitching the
-relevant parts of different tiles within the polygon), see
-@option{--polygon} in @ref{Invoking astcrop}. Alternatively, it can crop
-out rectangular regions through the @option{--section} option from one
-image, see @ref{Crop section syntax}.
+Even though the tile sizes are chosen to be large enough that not too many galaxies/targets fall on the edges of the tiles, inevitably some do.
+So when you simply crop the image of such targets from one tile, you will miss 
a large area of the surrounding sky (which is essential in estimating the 
noise).
+Therefore in its WCS mode, Crop will stitch parts of the tiles that are 
relevant for a target (with the given width) from all the input images that 
cover that region into the output.
+Of course, the tiles have to be present in the list of input files.
+
+Besides cropping postage stamps around certain coordinates, Crop can also crop 
arbitrary polygons from an image (or a set of tiles by stitching the relevant 
parts of different tiles within the polygon), see @option{--polygon} in 
@ref{Invoking astcrop}.
+Alternatively, it can crop out rectangular regions through the 
@option{--section} option from one image, see @ref{Crop section syntax}.
 
 @menu
 * Crop modes::                  Basic modes to define crop region.
@@ -12140,14 +9051,12 @@ image, see @ref{Crop section syntax}.
 
 @node Crop modes, Crop section syntax, Crop, Crop
 @subsection Crop modes
-In order to be comprehensive, intuitive, and easy to use, there are two
-ways to define the crop:
+In order to be comprehensive, intuitive, and easy to use, Crop provides two ways to define the crop region:
 
 @enumerate
 @item
-From its center and side length. For example if you already know the
-coordinates of an object and want to inspect it in an image or to generate
-postage stamps of a catalog containing many such coordinates.
+From its center and side length.
+For example if you already know the coordinates of an object and want to 
inspect it in an image or to generate postage stamps of a catalog containing 
many such coordinates.
 
 @item
 The vertices of the crop region, this can be useful for larger crops over
@@ -12155,227 +9064,156 @@ many targets, for example to crop out a uniformly 
deep, or contiguous,
 region of a large survey.
 @end enumerate
 
-Irrespective of how the crop region is defined, the coordinates to define
-the crop can be in Image (pixel) or World Coordinate System (WCS)
-standards. All coordinates are read as floating point numbers (not
-integers, except for the @option{--section} option, see below). By setting
-the @emph{mode} in Crop, you define the standard that the given coordinates
-must be interpreted. Here, the different ways to specify the crop region
-are discussed within each standard. For the full list options, please see
-@ref{Invoking astcrop}.
-
-When the crop is defined by its center, the respective (integer) central
-pixel position will be found internally according to the FITS standard. To
-have this pixel positioned in the center of the cropped region, the final
-cropped region will have an add number of pixels (even if you give an even
-number to @option{--width} in image mode).
-
-Furthermore, when the crop is defined as by its center, Crop allows you to
-only keep crops what don't have any blank pixels in the vicinity of their
-center (your primary target). This can be very convenient when your input
-catalog/coordinates originated from another survey/filter which is not
-fully covered by your input image, to learn more about this feature, please
-see the description of the @option{--checkcenter} option in @ref{Invoking
-astcrop}.
+Irrespective of how the crop region is defined, the coordinates to define the 
crop can be in Image (pixel) or World Coordinate System (WCS) standards.
+All coordinates are read as floating point numbers (not integers, except for 
the @option{--section} option, see below).
+By setting the @emph{mode} in Crop, you define the standard in which the given coordinates must be interpreted.
+Here, the different ways to specify the crop region are discussed within each standard.
+For the full list of options, please see @ref{Invoking astcrop}.
+
+When the crop is defined by its center, the respective (integer) central pixel 
position will be found internally according to the FITS standard.
+To have this pixel positioned in the center of the cropped region, the final cropped region will have an odd number of pixels (even if you give an even number to @option{--width} in image mode).
+
+Furthermore, when the crop is defined by its center, Crop allows you to only keep crops that don't have any blank pixels in the vicinity of their center (your primary target).
+This can be very convenient when your input catalog/coordinates originated from another survey/filter which is not fully covered by your input image.
+To learn more about this feature, please see the description of the @option{--checkcenter} option in @ref{Invoking astcrop}.
 
 @table @asis
 @item Image coordinates
-In image mode (@option{--mode=img}), Crop interprets the pixel coordinates
-and widths in units of the input data-elements (for example pixels in an
-image, not world coordinates). In image mode, only one image may be
-input. The output crop(s) can be defined in multiple ways as listed below.
+In image mode (@option{--mode=img}), Crop interprets the pixel coordinates and 
widths in units of the input data-elements (for example pixels in an image, not 
world coordinates).
+In image mode, only one image may be input.
+The output crop(s) can be defined in multiple ways as listed below.
 
 @table @asis
 @item Center of multiple crops (in a catalog)
-The center of (possibly multiple) crops are read from a text file. In this
-mode, the columns identified with the @option{--coordcol} option are
-interpreted as the center of a crop with a width of @option{--width} pixels
-along each dimension. The columns can contain any floating point value. The
-value to @option{--output} option is seen as a directory which will host
-(the possibly multiple) separate crop files, see @ref{Crop output} for
-more. For a tutorial using this feature, please see @ref{Finding reddest
-clumps and visual inspection}.
+The centers of (possibly multiple) crops are read from a text file.
+In this mode, the columns identified with the @option{--coordcol} option are interpreted as the center of a crop with a width of @option{--width} pixels along each dimension.
+The columns can contain any floating point value.
+The value given to the @option{--output} option is seen as a directory which will host the (possibly multiple) separate crop files, see @ref{Crop output} for more.
+For a tutorial using this feature, please see @ref{Finding reddest clumps and visual inspection}.
 
 @item Center of a single crop (on the command-line)
-The center of the crop is given on the command-line with the
-@option{--center} option. The crop width is specified by the
-@option{--width} option along each dimension. The given coordinates and
-width can be any floating point number.
+The center of the crop is given on the command-line with the @option{--center} 
option.
+The crop width is specified by the @option{--width} option along each 
dimension.
+The given coordinates and width can be any floating point number.
 
 @item Vertices of a single crop
-In Image mode there are two options to define the vertices of a region to
-crop: @option{--section} and @option{--polygon}. The former is lower-level
-(doesn't accept floating point vertices, and only a rectangular region can
-be defined), it is also only available in Image mode. Please see @ref{Crop
-section syntax} for a full description of this method.
-
-The latter option (@option{--polygon}) is a higher-level method to define
-any convex polygon (with any number of vertices) with floating point
-values. Please see the description of this option in @ref{Invoking astcrop}
-for its syntax.
+In Image mode there are two options to define the vertices of a region to 
crop: @option{--section} and @option{--polygon}.
+The former is lower-level (it doesn't accept floating point vertices, and only a rectangular region can be defined); it is also only available in Image mode.
+Please see @ref{Crop section syntax} for a full description of this method.
+
+The latter option (@option{--polygon}) is a higher-level method to define any 
convex polygon (with any number of vertices) with floating point values.
+Please see the description of this option in @ref{Invoking astcrop} for its 
syntax.
 @end table
 
 @item WCS coordinates
-In WCS mode (@option{--mode=wcs}), the coordinates and widths are
-interpreted using the World Coordinate System (WCS, that must accompany
-the dataset), not pixel coordinates. In WCS mode, Crop accepts multiple
-datasets as input. When the cropped region (defined by its center or
-vertices) overlaps with multiple of the input images/tiles, the overlapping
-regions will be taken from the respective input (they will be stitched when
-necessary for each output crop).
-
-In this mode, the input images do not necessarily have to be the same size,
-they just need to have the same orientation and pixel resolution. Currently
-only orientation along the celestial coordinates is accepted, if your input
-has a different orientation you can use Warp's @option{--align} option to
-align the image before cropping it (see @ref{Warp}).
-
-Each individual input image/tile can even be smaller than the final
-crop. In any case, any part of any of the input images which overlaps with
-the desired region will be used in the crop. Note that if there is an
-overlap in the input images/tiles, the pixels from the last input image
-read are going to be used for the overlap. Crop will not change pixel
-values, so it assumes your overlapping tiles were cutout from the same
-original image. There are multiple ways to define your cropped region as
-listed below.
+In WCS mode (@option{--mode=wcs}), the coordinates and widths are interpreted using the World Coordinate System (WCS, which must accompany the dataset), not pixel coordinates.
+In WCS mode, Crop accepts multiple datasets as input.
+When the cropped region (defined by its center or vertices) overlaps with multiple of the input images/tiles, the overlapping regions will be taken from the respective input (they will be stitched when necessary for each output crop).
+
+In this mode, the input images do not necessarily have to be the same size, 
they just need to have the same orientation and pixel resolution.
+Currently only orientation along the celestial coordinates is accepted; if your input has a different orientation, you can use Warp's @option{--align} option to align the image before cropping it (see @ref{Warp}).
+
+Each individual input image/tile can even be smaller than the final crop.
+In any case, any part of any of the input images which overlaps with the 
desired region will be used in the crop.
+Note that if there is an overlap in the input images/tiles, the pixels from 
the last input image read are going to be used for the overlap.
+Crop will not change pixel values, so it assumes your overlapping tiles were cut out from the same original image.
+There are multiple ways to define your cropped region as listed below.
 
 @table @asis
 
 @item Center of multiple crops (in a catalog)
-Similar to catalog inputs in Image mode (above), except that the values
-along each dimension are assumed to have the same units as the dataset's
-WCS information. For example, the central RA and Dec value for each crop
-will be read from the first and second calls to the @option{--coordcol}
-option. The width of the cropped box (in units of the WCS, or degrees in RA
-and Dec mode) must be specified with the @option{--width} option.
+Similar to catalog inputs in Image mode (above), except that the values along 
each dimension are assumed to have the same units as the dataset's WCS 
information.
+For example, the central RA and Dec value for each crop will be read from the 
first and second calls to the @option{--coordcol} option.
+The width of the cropped box (in units of the WCS, or degrees in RA and Dec 
mode) must be specified with the @option{--width} option.
 
 @item Center of a single crop (on the command-line)
-You can specify the center of only one crop box with the @option{--center}
-option. If it exists in the input images, it will be cropped similar to the
-catalog mode, see above also for @code{--width}.
+You can specify the center of only one crop box with the @option{--center} option.
+If it exists in the input images, it will be cropped similarly to the catalog mode; see above also for @option{--width}.
 
 @item Vertices of a single crop
-The @option{--polygon} option is a high-level method to define any convex
-polygon (with any number of vertices). Please see the description of this
-option in @ref{Invoking astcrop} for its syntax.
+The @option{--polygon} option is a high-level method to define any convex 
polygon (with any number of vertices).
+Please see the description of this option in @ref{Invoking astcrop} for its 
syntax.
 @end table
 
 @cartouche
 @noindent
-@strong{CAUTION:} In WCS mode, the image has to be aligned with the
-celestial coordinates, such that the first FITS axis is parallel (opposite
-direction) to the Right Ascension (RA) and the second FITS axis is parallel
-to the declination. If these conditions aren't met for an image, Crop will
-warn you and abort. You can use Warp's @option{--align} option to align the
-input image with these coordinates, see @ref{Warp}.
+@strong{CAUTION:} In WCS mode, the image has to be aligned with the celestial 
coordinates, such that the first FITS axis is parallel (opposite direction) to 
the Right Ascension (RA) and the second FITS axis is parallel to the 
declination.
+If these conditions aren't met for an image, Crop will warn you and abort.
+You can use Warp's @option{--align} option to align the input image with these 
coordinates, see @ref{Warp}.
 @end cartouche
 
 @end table
 
-As a summary, if you don't specify a catalog, you have to define the
-cropped region manually on the command-line.  In any case the mode is
-mandatory for Crop to be able to interpret the values given as coordinates
-or widths.
+As a summary, if you don't specify a catalog, you have to define the cropped 
region manually on the command-line.
+In any case, the mode is mandatory for Crop to be able to interpret the values given as coordinates or widths.
 
 
 @node Crop section syntax, Blank pixels, Crop modes, Crop
 @subsection Crop section syntax
 
 @cindex Crop a given section of image
-When in image mode, one of the methods to crop only one rectangular section
-from the input image is to use the @option{--section} option. Crop has a
-powerful syntax to read the box parameters from a string of characters. If
-you leave certain parts of the string to be empty, Crop can fill them for
-you based on the input image sizes.
+When in image mode, one of the methods to crop only one rectangular section 
from the input image is to use the @option{--section} option.
+Crop has a powerful syntax to read the box parameters from a string of 
characters.
+If you leave certain parts of the string empty, Crop can fill them in for you based on the input image sizes.
 
 @cindex Define section to crop
-To define a box, you need the coordinates of two points: the first
-(@code{X1}, @code{Y1}) and the last pixel (@code{X2}, @code{Y2}) pixel
-positions in the image, or four integer numbers in total. The four
-coordinates can be specified with one string in this format:
-`@command{X1:X2,Y1:Y2}'. This string is given to the @option{--section}
-option. Therefore, the pixels along the first axis that are
-@mymath{\geq}@command{X1} and @mymath{\leq}@command{X2} will be included in
-the cropped image. The same goes for the second axis. Note that each
-different term will be read as an integer, not a float. This is a low-level
-option, for a higher-level way to specify region (any polygon, not just a
-box), please see the @option{--polygon} option in @ref{Crop options}. Also
-note that in the FITS standard, pixel indexes along each axis start from
-unity(1) not zero(0).
+To define a box, you need the coordinates of two points: the first (@code{X1}, @code{Y1}) and last (@code{X2}, @code{Y2}) pixel positions in the image, or four integer numbers in total.
+The four coordinates can be specified with one string in this format: 
`@command{X1:X2,Y1:Y2}'.
+This string is given to the @option{--section} option.
+Therefore, the pixels along the first axis that are @mymath{\geq}@command{X1} 
and @mymath{\leq}@command{X2} will be included in the cropped image.
+The same goes for the second axis.
+Note that each different term will be read as an integer, not a float.
+This is a low-level option; for a higher-level way to specify a region (any polygon, not just a box), please see the @option{--polygon} option in @ref{Crop options}.
+Also note that in the FITS standard, pixel indexes along each axis start from unity (1), not zero (0).
 
 @cindex Crop section format
 You can omit any of the values and they will be filled automatically.
-The left hand side of the colon (@command{:}) will be filled with
-@command{1}, and the right side with the image size. So, @command{2:,:}
-will include the full range of pixels along the second axis and only
-those with a first axis index larger than @command{2} in the first
-axis. If the colon is omitted for a dimension, then the full range is
-automatically used. So the same string is also equal to @command{2:,}
-or @command{2:} or even @command{2}. If you want such a case for the
-second axis, you should set it to: @command{,2}.
-
-If you specify a negative value, it will be seen as before the indexes of
-the image which are outside the image along the bottom or left sides when
-viewed in SAO ds9. In case you want to count from the top or right sides of
-the image, you can use an asterisk (@option{*}). When confronted with a
-@option{*}, Crop will replace it with the maximum length of the image in
-that dimension. So @command{*-10:*+10,*-20:*+20} will mean that the crop
-box will be @math{20\times40} pixels in size and only include the top
-corner of the input image with 3/4 of the image being covered by blank
-pixels, see @ref{Blank pixels}.
-
-If you feel more comfortable with space characters between the values, you
-can use as many space characters as you wish, just be careful to put your
-value in double quotes, for example @command{--section="5:200,
-123:854"}. If you forget the quotes, anything after the first space will
-not be seen by @option{--section} and you will most probably get an error
-because the rest of your string will be read as a filename (which most
-probably doesn't exist). See @ref{Command-line} for a description of how
-the command-line works.
+The left hand side of the colon (@command{:}) will be filled with @command{1}, 
and the right side with the image size.
+So, @command{2:,:} will include the full range of pixels along the second axis, and only those with a first-axis index greater than or equal to @command{2}.
+If the colon is omitted for a dimension, then the full range is automatically 
used.
+So the same string is also equal to @command{2:,} or @command{2:} or even 
@command{2}.
+If you want such a case for the second axis, you should set it to: 
@command{,2}.
+
+If you specify a negative value, it will be interpreted as an index before the start of the image, which lies outside the image along the bottom or left sides when viewed in SAO ds9.
+In case you want to count from the top or right sides of the image, you can 
use an asterisk (@option{*}).
+When confronted with a @option{*}, Crop will replace it with the maximum 
length of the image in that dimension.
+So @command{*-10:*+10,*-20:*+20} will mean that the crop box will be 
@math{20\times40} pixels in size and only include the top corner of the input 
image with 3/4 of the image being covered by blank pixels, see @ref{Blank 
pixels}.
+
+If you feel more comfortable with space characters between the values, you can 
use as many space characters as you wish, just be careful to put your value in 
double quotes, for example @command{--section="5:200, 123:854"}.
+If you forget the quotes, anything after the first space will not be seen by 
@option{--section} and you will most probably get an error because the rest of 
your string will be read as a filename (which most probably doesn't exist).
+See @ref{Command-line} for a description of how the command-line works.
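+
+As a short sketch of this syntax (the file name and pixel values are illustrative):
+
+@example
+## Rectangle from pixel (100,50) to (250,950):
+$ astcrop --mode=img --section=100:250,50:950 image.fits
+
+## Full second axis, first axis from pixel 300 to the end:
+$ astcrop --mode=img --section=300: image.fits
+@end example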
 
 
 @node Blank pixels, Invoking astcrop, Crop section syntax, Crop
 @subsection Blank pixels
 
 @cindex Blank pixel
-The cropped box can potentially include pixels that are beyond the image
-range. For example when a target in the input catalog was very near the
-edge of the input image. The parts of the cropped image that were not in
-the input image will be filled with the following two values depending on
-the data type of the image. In both cases, SAO ds9 will not color code
-those pixels.
+The cropped box can potentially include pixels that are beyond the image range.
+For example when a target in the input catalog was very near the edge of the 
input image.
+The parts of the cropped image that were not in the input image will be filled 
with the following two values depending on the data type of the image.
+In both cases, SAO ds9 will not color code those pixels.
 @itemize
 @item
-If the data type of the image is a floating point type (float or
-double), IEEE NaN (Not a number) will be used.
+If the data type of the image is a floating point type (float or double), IEEE 
NaN (Not a number) will be used.
 @item
-For integer types, pixels out of the image will be filled with the
-value of the @command{BLANK} keyword in the cropped image header. The
-value assigned to it is the lowest value possible for that type, so
-you will probably never need it any way. Only for the unsigned
-character type (@command{BITPIX=8} in the FITS header), the maximum
-value is used because it is unsigned, the smallest value is zero which
-is often meaningful.
+For integer types, pixels out of the image will be filled with the value of 
the @command{BLANK} keyword in the cropped image header.
+The value assigned to it is the lowest value possible for that type, so you will probably never need it anyway.
+Only for the unsigned character type (@command{BITPIX=8} in the FITS header) is the maximum value used, because it is unsigned and its smallest value (zero) is often meaningful.
 @end itemize
-You can ask for such blank regions to not be included in the output
-crop image using the @option{--noblank} option. In such cases, there
-is no guarantee that the image size of your outputs are what you asked
-for.
+You can ask for such blank regions to not be included in the output crop image 
using the @option{--noblank} option.
+In such cases, there is no guarantee that the image sizes of your outputs are what you asked for.
 
-In some survey images, unfortunately they do not use the
-@command{BLANK} FITS keyword. Instead they just give all pixels
-outside of the survey area a value of zero. So by default, when
-dealing with float or double image types, any values that are 0.0 are
-also regarded as blank regions. This can be turned off with the
-@option{--zeroisnotblank} option.
+Unfortunately, some survey images do not use the @command{BLANK} FITS keyword.
+Instead, they just give all pixels outside of the survey area a value of zero.
+So by default, when dealing with float or double image types, any values that 
are 0.0 are also regarded as blank regions.
+This can be turned off with the @option{--zeroisnotblank} option.
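+
+For example, a hypothetical sketch that keeps zero-valued pixels as data when cropping from survey tiles:
+
+@example
+$ astcrop --mode=wcs --center=53.16,-27.78 --width=0.1 \
+          --zeroisnotblank tile-*.fits
+@end example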
 
 
 @node Invoking astcrop,  , Blank pixels, Crop
 @subsection Invoking Crop
 
-Crop will crop a region from an image. If in WCS mode, it will also
-stitch parts from separate images in the input files. The executable name
-is @file{astcrop} with the following general template
+Crop will crop a region from an image.
+If in WCS mode, it will also stitch parts from separate images in the input 
files.
+The executable name is @file{astcrop} with the following general template
 
 @example
 $ astcrop [OPTION...] [ASCIIcatalog] ASTRdata ...
@@ -12404,43 +9242,30 @@ $ astcrop --mode=img --center=568.342,2091.719 
--width=201 image.fits
 @end example
 
 @noindent
-Crop has one mandatory argument which is the input image name(s), shown
-above with @file{ASTRdata ...}. You can use shell expansions, for example
-@command{*} for this if you have lots of images in WCS mode. If the crop
-box centers are in a catalog, you can use the @option{--catalog} option. In
-other cases, you have to provide the single cropped output parameters must
-be given with command-line options. See @ref{Crop output} for how the
-output file name(s) can be specified. For the full list of general options
-to all Gnuastro programs (including Crop), please see @ref{Common options}.
-
-Floating point numbers can be used to specify the crop region (except the
-@option{--section} option, see @ref{Crop section syntax}). In such cases,
-the floating point values will be used to find the desired integer pixel
-indices based on the FITS standard. Hence, Crop ultimately doesn't do any
-sub-pixel cropping (in other words, it doesn't change pixel values). If you
-need such crops, you can use @ref{Warp} to first warp the image to the a
-new pixel grid, then crop from that. For example, let's assume you want a
-crop from pixels 12.982 to 80.982 along the first dimension. You should
-first translate the image by @mymath{-0.482} (note that the edge of a pixel
-is at integer multiples of @mymath{0.5}). So you should run Warp with
-@option{--translate=-0.482,0} and then crop the warped image with
-@option{--section=13:81}.
-
-There are two ways to define the cropped region: with its center or its
-vertices. See @ref{Crop modes} for a full description. In the former case,
-Crop can check if the central region of the cropped image is indeed filled
-with data or is blank (see @ref{Blank pixels}), and not produce any output
-when the center is blank, see the description under @option{--checkcenter}
-for more.
+Crop has one mandatory argument which is the input image name(s), shown above with @file{ASTRdata ...}.
+You can use shell expansions, for example @command{*}, for this if you have lots of images in WCS mode.
+If the crop box centers are in a catalog, you can use the @option{--catalog} option.
+In other cases, the parameters of the single cropped output must be given with command-line options.
+See @ref{Crop output} for how the output file name(s) can be specified.
+For the full list of general options to all Gnuastro programs (including 
Crop), please see @ref{Common options}.
+
+Floating point numbers can be used to specify the crop region (except the 
@option{--section} option, see @ref{Crop section syntax}).
+In such cases, the floating point values will be used to find the desired 
integer pixel indices based on the FITS standard.
+Hence, Crop ultimately doesn't do any sub-pixel cropping (in other words, it 
doesn't change pixel values).
+If you need such crops, you can use @ref{Warp} to first warp the image to a new pixel grid, then crop from that.
+For example, let's assume you want a crop from pixels 12.982 to 80.982 along 
the first dimension.
+You should first translate the image by @mymath{-0.482} (note that the edge of 
a pixel is at integer multiples of @mymath{0.5}).
+So you should run Warp with @option{--translate=-0.482,0} and then crop the 
warped image with @option{--section=13:81}.
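+
+For instance, a minimal sketch of these two steps (the @file{warped.fits} output name given to Warp here is only illustrative) could be:
+
+@example
+$ astwarp --translate=-0.482,0 --output=warped.fits image.fits
+$ astcrop --section=13:81 warped.fits
+@end example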
+
+There are two ways to define the cropped region: with its center or its vertices.
+See @ref{Crop modes} for a full description.
+In the former case, Crop can check if the central region of the cropped image is indeed filled with data or is blank (see @ref{Blank pixels}), and not produce any output when the center is blank; see the description under @option{--checkcenter} for more.
 
 @cindex Asynchronous thread allocation
-When in catalog mode, Crop will run in parallel unless you set
-@option{--numthreads=1}, see @ref{Multi-threaded operations}. Note that
-when multiple outputs are created with threads, the outputs will not be
-created in the same order. This is because the threads are asynchronous and
-thus not started in order. This has no effect on each output, see
-@ref{Finding reddest clumps and visual inspection} for a tutorial on
-effectively using this feature.
+When in catalog mode, Crop will run in parallel unless you set @option{--numthreads=1}, see @ref{Multi-threaded operations}.
+Note that when multiple outputs are created with threads, the outputs will not be created in the same order.
+This is because the threads are asynchronous and thus not started in order.
+This has no effect on each output, see @ref{Finding reddest clumps and visual inspection} for a tutorial on effectively using this feature.
 
 @menu
 * Crop options::                A list of all the options with explanation.
@@ -12450,19 +9275,13 @@ effectively using this feature.
 @node Crop options, Crop output, Invoking astcrop, Invoking astcrop
 @subsubsection Crop options
 
-The options can be classified into the following contexts: Input,
-Output and operating mode options. Options that are common to all
-Gnuastro program are listed in @ref{Common options} and will not be
-repeated here.
+The options can be classified into the following contexts: Input, Output and operating mode options.
+Options that are common to all Gnuastro programs are listed in @ref{Common options} and will not be repeated here.
 
-When you are specifying the crop vertices your self (through
-@option{--section}, or @option{--polygon}) on relatively small regions
-(depending on the resolution of your images) the outputs from image and WCS
-mode can be approximately equivalent. However, as the crop sizes get large,
-the curved nature of the WCS coordinates have to be considered. For
-example, when using @option{--section}, the right ascension of the bottom
-left and top left corners will not be equal. If you only want regions
-within a given right ascension, use @option{--polygon} in WCS mode.
+When you are specifying the crop vertices yourself (through @option{--section} or @option{--polygon}) on relatively small regions (depending on the resolution of your images), the outputs from image and WCS mode can be approximately equivalent.
+However, as the crop sizes get large, the curved nature of the WCS coordinates has to be considered.
+For example, when using @option{--section}, the right ascension of the bottom left and top left corners will not be equal.
+If you only want regions within a given right ascension, use @option{--polygon} in WCS mode.
 
 @noindent
 Input image parameters:
@@ -12470,29 +9289,20 @@ Input image parameters:
 
 @item --hstartwcs=INT
 @cindex CANDELS survey
-Specify the first keyword card (line number) to start finding the input
-image world coordinate system information. Distortions were only recently
-included in WCSLIB (from version 5). Therefore until now, different
-telescope would apply their own specific set of WCS keywords and put them
-into the image header along with those that WCSLIB does recognize. So now
-that WCSLIB recognizes most of the standard distortion parameters, they
-will get confused with the old ones and give completely wrong results. For
-example in the CANDELS-GOODS South
-images@footnote{@url{https://archive.stsci.edu/pub/hlsp/candels/goods-s/gs-tot/v1.0/}}.
-
-The two @option{--hstartwcs} and @option{--hendwcs} are thus provided
-so when using older datasets, you can specify what region in the FITS
-headers you want to use to read the WCS keywords. Note that this is
-only relevant for reading the WCS information, basic data information
-like the image size are read separately. These two options will only
-be considered when the value to @option{--hendwcs} is larger than that
-of @option{--hstartwcs}. So if they are equal or @option{--hstartwcs}
-is larger than @option{--hendwcs}, then all the input keywords will be
-parsed to get the WCS information of the image.
+Specify the first keyword card (line number) to start finding the input image world coordinate system information.
+Distortions were only recently included in WCSLIB (from version 5).
+Therefore until now, different telescopes would apply their own specific set of WCS keywords and put them into the image header along with those that WCSLIB does recognize.
+So now that WCSLIB recognizes most of the standard distortion parameters, the old keywords will confuse it and give completely wrong results.
+This happens, for example, in the CANDELS-GOODS South images@footnote{@url{https://archive.stsci.edu/pub/hlsp/candels/goods-s/gs-tot/v1.0/}}.
+
+The two options @option{--hstartwcs} and @option{--hendwcs} are thus provided so that when using older datasets, you can specify which region of the FITS header should be used to read the WCS keywords.
+Note that this is only relevant for reading the WCS information; basic data information like the image size is read separately.
+These two options will only be considered when the value to @option{--hendwcs} is larger than that of @option{--hstartwcs}.
+So if they are equal, or @option{--hstartwcs} is larger than @option{--hendwcs}, then all the input keywords will be parsed to get the WCS information of the image.
 
 @item --hendwcs=INT
-Specify the last keyword card to read for specifying the image world
-coordinate system on the input images. See @option{--hstartwcs}
+Specify the last keyword card to read for specifying the image world coordinate system on the input images.
+See @option{--hstartwcs}.
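+
+As a purely illustrative sketch (the card numbers and coordinates here are hypothetical and depend on your particular header), the WCS parsing could be limited to cards 7 through 75 with:
+
+@example
+$ astcrop --hstartwcs=7 --hendwcs=75 --mode=wcs \
+          --center=53.16,-27.78 --width=10/3600 image.fits
+@end example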
 
 @end table
 
@@ -12502,71 +9312,51 @@ Crop box parameters:
 
 @item -c FLT[,FLT[,...]]
 @itemx --center=FLT[,FLT[,...]]
-The central position of the crop in the input image. The positions along
-each dimension must be separated by a comma (@key{,}) and fractions are
-also acceptable. The number of values given to this option must be the same
-as the dimensions of the input dataset. The width of the crop should be set
-with @code{--width}. The units of the coordinates are read based on the
-value to the @option{--mode} option, see below.
+The central position of the crop in the input image.
+The positions along each dimension must be separated by a comma (@key{,}) and fractions are also acceptable.
+The number of values given to this option must be the same as the dimensions of the input dataset.
+The width of the crop should be set with @option{--width}.
+The units of the coordinates are read based on the value to the @option{--mode} option, see below.
 
 @item -w FLT[,FLT[,...]]
 @itemx --width=FLT[,FLT[,...]]
-Width of the cropped region about its center. @option{--width} may take
-either a single value (to be used for all dimensions) or multiple values (a
-specific value for each dimension). If in WCS mode, value(s) given to this
-option will be read in the same units as the dataset's WCS information
-along this dimension. The final output will have an odd number of pixels to
-allow easy identification of the pixel which keeps your requested
-coordinate (from @option{--center} or @option{--catalog}).
-
-The @code{--width} option also accepts fractions. For example if you want
-the width of your crop to be 3 by 5 arcseconds along RA and Dec
-respectively, you can call it with: @option{--width=3/3600,5/3600}.
-
-If you want an even sided crop, you can run Crop afterwards with
-@option{--section=":*-1,:*-1"} or @option{--section=2:,2:} (depending on
-which side you don't need), see @ref{Crop section syntax}.
+Width of the cropped region about its center.
+@option{--width} may take either a single value (to be used for all dimensions) or multiple values (a specific value for each dimension).
+If in WCS mode, value(s) given to this option will be read in the same units as the dataset's WCS information along this dimension.
+The final output will have an odd number of pixels to allow easy identification of the pixel which keeps your requested coordinate (from @option{--center} or @option{--catalog}).
+
+The @option{--width} option also accepts fractions.
+For example if you want the width of your crop to be 3 by 5 arcseconds along RA and Dec respectively, you can call it with: @option{--width=3/3600,5/3600}.
+
+If you want an even sided crop, you can run Crop afterwards with @option{--section=":*-1,:*-1"} or @option{--section=2:,2:} (depending on which side you don't need), see @ref{Crop section syntax}.
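+
+For instance, a sketch of such a 3 by 5 arcsecond crop around a hypothetical position (the coordinates are only for illustration) could be:
+
+@example
+$ astcrop --mode=wcs --center=53.1616,-27.7802 \
+          --width=3/3600,5/3600 image.fits
+@end example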
 
 @item -l STR
 @itemx --polygon=STR
-String of crop polygon vertices. Note that currently only convex polygons
-should be used. In the future we will make it work for all kinds of
-polygons. Convex polygons are polygons that do not have an internal angle
-more than 180 degrees. This option can be used both in the image and WCS
-modes, see @ref{Crop modes}. The cropped image will be the size of the
-rectangular region that completely encompasses the polygon. By default all
-the pixels that are outside of the polygon will be set as blank values (see
-@ref{Blank pixels}). However, if @option{--outpolygon} is called all pixels
-internal to the vertices will be set to blank.
-
-The syntax for the polygon vertices is similar to, and simpler than, that
-for @option{--section}. In short, the dimensions of each coordinate are
-separated by a comma (@key{,}) and each vertex is separated by a colon
-(@key{:}). You can define as many vertices as you like. If you would like
-to use space characters between the dimensions and vertices to make them
-more human-readable, then you have to put the value to this option in
-double quotation marks.
-
-For example, let's assume you want to work on the deepest part of the
-WFC3/IR images of Hubble Space Telescope eXtreme Deep Field
-(HST-XDF). @url{https://archive.stsci.edu/prepds/xdf/, According to the
-webpage}@footnote{@url{https://archive.stsci.edu/prepds/xdf/}} the deepest
-part is contained within the coordinates:
+String of crop polygon vertices.
+Note that currently only convex polygons should be used.
+In the future we will make it work for all kinds of polygons.
+Convex polygons are polygons that do not have an internal angle of more than 180 degrees.
+This option can be used both in the image and WCS modes, see @ref{Crop modes}.
+The cropped image will be the size of the rectangular region that completely encompasses the polygon.
+By default all the pixels that are outside of the polygon will be set as blank values (see @ref{Blank pixels}).
+However, if @option{--outpolygon} is called, all pixels internal to the vertices will be set to blank.
+
+The syntax for the polygon vertices is similar to, and simpler than, that for @option{--section}.
+In short, the dimensions of each coordinate are separated by a comma (@key{,}) and each vertex is separated by a colon (@key{:}).
+You can define as many vertices as you like.
+If you would like to use space characters between the dimensions and vertices to make them more human-readable, then you have to put the value to this option in double quotation marks.
+
+For example, let's assume you want to work on the deepest part of the WFC3/IR images of Hubble Space Telescope eXtreme Deep Field (HST-XDF).
+@url{https://archive.stsci.edu/prepds/xdf/, According to the webpage}@footnote{@url{https://archive.stsci.edu/prepds/xdf/}} the deepest part is contained within the coordinates:
 
 @example
 [ (53.187414,-27.779152), (53.159507,-27.759633),
   (53.134517,-27.787144), (53.161906,-27.807208) ]
 @end example
 
-They have provided mask images with only these pixels in the WFC3/IR
-images, but what if you also need to work on the same region in the full
-resolution ACS images? Also what if you want to use the CANDELS data for
-the shallow region? Running Crop with @option{--polygon} will easily pull
-out this region of the image for you irrespective of the resolution. If you
-have set the operating mode to WCS mode in your nearest configuration file
-(see @ref{Configuration files}), there is no need to call
-@option{--mode=wcs} on the command line. You may also provide many FITS
-images/tiles and Crop will stitch them to produce this cropped region:
+They have provided mask images with only these pixels in the WFC3/IR images, but what if you also need to work on the same region in the full resolution ACS images?
+Also what if you want to use the CANDELS data for the shallow region?
+Running Crop with @option{--polygon} will easily pull out this region of the image for you, irrespective of the resolution.
+If you have set the operating mode to WCS mode in your nearest configuration file (see @ref{Configuration files}), there is no need to call @option{--mode=wcs} on the command line.
+You may also provide many FITS images/tiles and Crop will stitch them to produce this cropped region:
 
 @example
 $ astcrop --mode=wcs desired-filter-image(s).fits           \
@@ -12574,30 +9364,19 @@ $ astcrop --mode=wcs desired-filter-image(s).fits          \
               53.134517,-27.787144 : 53.161906,-27.807208"
 @end example
 @cindex SAO DS9
-In other cases, you have an image and want to define the polygon yourself
-(it isn't already published like the example above). As the number of
-vertices increases, checking the vertex coordinates on a FITS viewer (for
-example SAO ds9) and typing them in one by one can be very tedious and
-prone to typo errors.
-
-You can take the following steps to avoid the frustration and possible
-typos: Open the image with ds9 and activate its ``region'' mode with
-@clicksequence{Edit@click{}Region}. Then define the region as a polygon
-with @clicksequence{Region@click{}Shape@click{}Polygon}. Click on the
-approximate center of the region you want and a small square will
-appear. By clicking on the vertices of the square you can shrink or expand
-it, clicking and dragging anywhere on the edges will enable you to define a
-new vertex. After the region has been nicely defined, save it as a file
-with @clicksequence{Region@click{}Save Regions}. You can then select the
-name and address of the output file, keep the format as @command{REG} and
-press ``OK''. In the next window, keep format as ``ds9'' and ``Coordinate
-System'' as ``fk5''. A plain text file (let's call it @file{ds9.reg}) is
-now created.
-
-You can now convert this plain text file to Crop's polygon format with this
-command (when typing on the command-line, ignore the ``@key{\}'' at the end
-of the first and second lines along with the extra spaces, these are only
-for nice printing):
+In other cases, you have an image and want to define the polygon yourself (it isn't already published like the example above).
+As the number of vertices increases, checking the vertex coordinates on a FITS viewer (for example SAO ds9) and typing them in one by one can be very tedious and prone to typos.
+
+You can take the following steps to avoid the frustration and possible typos: Open the image with ds9 and activate its ``region'' mode with @clicksequence{Edit@click{}Region}.
+Then define the region as a polygon with @clicksequence{Region@click{}Shape@click{}Polygon}.
+Click on the approximate center of the region you want and a small square will appear.
+By clicking on the vertices of the square you can shrink or expand it; clicking and dragging anywhere on the edges will enable you to define a new vertex.
+After the region has been nicely defined, save it as a file with @clicksequence{Region@click{}Save Regions}.
+You can then select the name and address of the output file, keep the format as @command{REG} and press ``OK''.
+In the next window, keep the format as ``ds9'' and the ``Coordinate System'' as ``fk5''.
+A plain text file (let's call it @file{ds9.reg}) is now created.
+
+You can now convert this plain text file to Crop's polygon format with this command (when typing on the command-line, ignore the ``@key{\}'' at the end of the first and second lines along with the extra spaces; these are only for nice printing):
 
 @example
 $ v=$(awk 'NR==4' ds9.reg | sed -e's/polygon(//'        \
@@ -12606,43 +9385,34 @@ $ astcrop --mode=wcs image.fits --polygon=$v
 @end example
 
 @item --outpolygon
-Keep all the regions outside the polygon and mask the inner ones with blank
-pixels (see @ref{Blank pixels}). This is practically the inverse of the
-default mode of treating polygons. Note that this option only works when
-you have only provided one input image. If multiple images are given (in
-WCS mode), then the full area covered by all the images has to be shown and
-the polygon excluded. This can lead to a very large area if large surveys
-like COSMOS are used. So Crop will abort and notify you. In such cases, it
-is best to crop out the larger region you want, then mask the smaller
-region with this option.
+Keep all the regions outside the polygon and mask the inner ones with blank pixels (see @ref{Blank pixels}).
+This is practically the inverse of the default mode of treating polygons.
+Note that this option only works when you have provided a single input image.
+If multiple images are given (in WCS mode), then the full area covered by all the images has to be shown and the polygon excluded.
+This can lead to a very large area if large surveys like COSMOS are used.
+So Crop will abort and notify you.
+In such cases, it is best to crop out the larger region you want, then mask the smaller region with this option.
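+
+For example, a sketch of masking the interior of a polygon (with purely illustrative vertices) could be:
+
+@example
+$ astcrop --mode=wcs image.fits --outpolygon \
+          --polygon="53.187,-27.779 : 53.159,-27.759 : 53.134,-27.787"
+@end example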
 
 @item -s STR
 @itemx --section=STR
-Section of the input image which you want to be cropped. See @ref{Crop
-section syntax} for a complete explanation on the syntax required for this
-input.
+Section of the input image which you want to be cropped.
+See @ref{Crop section syntax} for a complete explanation on the syntax required for this input.
 
 @item -x STR/INT
 @itemx --coordcol=STR/INT
-The column in a catalog to read as a coordinate. The value can be either
-the column number (starting from 1), or a match/search in the table
-meta-data, see @ref{Selecting table columns}. This option must be called
-multiple times, depending on the number of dimensions in the input
-dataset. If it is called more than necessary, the extra columns (later
-calls to this option on the command-line or configuration files) will be
-ignored, see @ref{Configuration file precedence}.
+The column in a catalog to read as a coordinate.
+The value can be either the column number (starting from 1), or a match/search in the table meta-data, see @ref{Selecting table columns}.
+This option must be called multiple times, depending on the number of dimensions in the input dataset.
+If it is called more than necessary, the extra columns (later calls to this option on the command-line or configuration files) will be ignored, see @ref{Configuration file precedence}.
 
 @item -n STR/INT
 @itemx --namecol=STR/INT
-Column selection of crop file name. The value can be either the column
-number (starting from 1), or a match/search in the table meta-data, see
-@ref{Selecting table columns}. This option can be used both in Image and
-WCS modes, and not a mandatory. When a column is given to this option, the
-final crop base file name will be taken from the contents of this
-column. The directory will be determined by the @option{--output} option
-(current directory if not given) and the value to @option{--suffix} will be
-appended. When this column isn't given, the row number will be used
-instead.
+Column selection of crop file name.
+The value can be either the column number (starting from 1), or a match/search in the table meta-data, see @ref{Selecting table columns}.
+This option can be used both in Image and WCS modes, and is not mandatory.
+When a column is given to this option, the final crop base file name will be taken from the contents of this column.
+The directory will be determined by the @option{--output} option (current directory if not given) and the value to @option{--suffix} will be appended.
+When this column isn't given, the row number will be used instead.
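+
+For example, a sketch of naming crops from a hypothetical @code{NAME} column of @file{cat.fits}, writing into an existing @file{crops} directory, could be:
+
+@example
+$ astcrop --mode=wcs --catalog=cat.fits --namecol=NAME \
+          --width=20/3600 --output=crops survey-*.fits
+@end example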
 
 @end table
 
@@ -12653,57 +9423,40 @@ Output options:
 @item -c FLT/INT
 @itemx --checkcenter=FLT/INT
 @cindex Check center of crop
-Square box width of region in the center of the image to check for blank
-values. If any of the pixels in this central region of a crop (defined by
-its center) are blank, then it will not be stored in an output file. If the
-value to this option is zero, no checking is done. This check is only
-applied when the cropped region(s) are defined by their center (not by the
-vertices, see @ref{Crop modes}).
-
-The units of the value are interpreted based on the @option{--mode} value
-(in WCS or pixel units). The ultimate checked region size (in pixels) will
-be an odd integer around the center (converted from WCS, or when an even
-number of pixels are given to this option). In WCS mode, the value can be
-given as fractions, for example if the WCS units are in degrees,
-@code{0.1/3600} will correspond to a check size of 0.1 arcseconds.
-
-Because survey regions don't often have a clean square or rectangle shape,
-some of the pixels on the sides of the survey FITS image don't commonly
-have any data and are blank (see @ref{Blank pixels}). So when the catalog
-was not generated from the input image, it often happens that the image
-does not have data over some of the points.
-
-When the given center of a crop falls in such regions or outside the
-dataset, and this option has a non-zero value, no crop will be
-created. Therefore with this option, you can specify a width of a small box
-(3 pixels is often good enough) around the central pixel of the cropped
-image. You can check which crops were created and which weren't from the
-command-line (if @option{--quiet} was not called, see @ref{Operating mode
-options}), or in Crop's log file (see @ref{Crop output}).
+Square box width of the region in the center of the image to check for blank values.
+If any of the pixels in this central region of a crop (defined by its center) are blank, then it will not be stored in an output file.
+If the value to this option is zero, no checking is done.
+This check is only applied when the cropped region(s) are defined by their center (not by the vertices, see @ref{Crop modes}).
+
+The units of the value are interpreted based on the @option{--mode} value (in WCS or pixel units).
+The ultimate checked region size (in pixels) will be an odd integer around the center (converted from WCS, or when an even number of pixels is given to this option).
+In WCS mode, the value can be given as fractions: for example, if the WCS units are in degrees, @code{0.1/3600} will correspond to a check size of 0.1 arcseconds.
+
+Because survey regions don't often have a clean square or rectangle shape, some of the pixels on the sides of the survey FITS image don't commonly have any data and are blank (see @ref{Blank pixels}).
+So when the catalog was not generated from the input image, it often happens that the image does not have data over some of the points.
+
+When the given center of a crop falls in such regions or outside the dataset, and this option has a non-zero value, no crop will be created.
+Therefore with this option, you can specify the width of a small box (3 pixels is often good enough) around the central pixel of the cropped image.
+You can check which crops were created and which weren't from the command-line (if @option{--quiet} was not called, see @ref{Operating mode options}), or in Crop's log file (see @ref{Crop output}).
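+
+For example, a sketch of a catalog-based run that skips any crop whose central 3 by 3 pixel box is blank (the file names are only for illustration) could be:
+
+@example
+$ astcrop --mode=wcs --catalog=cat.fits --width=10/3600 \
+          --checkcenter=3 survey-*.fits
+@end example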
 
 @item -p STR
 @itemx --suffix=STR
-The suffix (or post-fix) of the output files for when you want all the
-cropped images to have a special ending. One case where this might be
-helpful is when besides the science images, you want the weight images (or
-exposure maps, which are also distributed with survey images) of the
-cropped regions too. So in one run, you can set the input images to the
-science images and @option{--suffix=_s.fits}. In the next run you can set
-the weight images as input and @option{--suffix=_w.fits}.
+The suffix (or post-fix) of the output files for when you want all the cropped images to have a special ending.
+One case where this might be helpful is when besides the science images, you want the weight images (or exposure maps, which are also distributed with survey images) of the cropped regions too.
+So in one run, you can set the input images to the science images and @option{--suffix=_s.fits}.
+In the next run you can set the weight images as input and @option{--suffix=_w.fits}.
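+
+A sketch of those two runs (with hypothetical file names) might look like this:
+
+@example
+$ astcrop --mode=wcs --catalog=cat.fits --width=10/3600 \
+          --suffix=_s.fits sci-*.fits
+$ astcrop --mode=wcs --catalog=cat.fits --width=10/3600 \
+          --suffix=_w.fits wei-*.fits
+@end example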
 
 @item -b
 @itemx --noblank
-Pixels outside of the input image that are in the crop box will not be
-used. By default they are filled with blank values (depending on type), see
-@ref{Blank pixels}. This option only applies only in Image mode, see
-@ref{Crop modes}.
+Pixels outside of the input image that are in the crop box will not be used.
+By default they are filled with blank values (depending on type), see @ref{Blank pixels}.
+This option only applies in Image mode, see @ref{Crop modes}.
 
 @item -z
 @itemx --zeroisnotblank
-In float or double images, it is common to give the value of zero to
-blank pixels. If the input image type is one of these two types, such
-pixels will also be considered as blank. You can disable this behavior
-with this option, see @ref{Blank pixels}.
+In float or double images, it is common to give the value of zero to blank pixels.
+If the input image type is one of these two types, such pixels will also be considered as blank.
+You can disable this behavior with this option, see @ref{Blank pixels}.
 
 @end table
 
@@ -12713,9 +9466,8 @@ Operating mode options:
 
 @item -O STR
 @itemx --mode=STR
-Operate in Image mode or WCS mode when the input coordinates can be both
-image or WCS. The value must either be @option{img} or @option{wcs}, see
-@ref{Crop modes} for a full description.
+Operate in Image mode or WCS mode when the input coordinates can be both image or WCS.
+The value must either be @option{img} or @option{wcs}, see @ref{Crop modes} for a full description.
 
 @end table
 
@@ -12731,42 +9483,28 @@ on how many crops were requested, see @ref{Crop modes}:
 
 @itemize
 @item
-When a catalog is given, the value of the @option{--output} (see
-@ref{Common options}) will be read as the directory to store the output
-cropped images. Hence if it doesn't already exist, Crop will abort with an
-error of a ``No such file or directory'' error.
+When a catalog is given, the value of the @option{--output} (see @ref{Common options}) will be read as the directory to store the output cropped images.
+Hence if it doesn't already exist, Crop will abort with a ``No such file or directory'' error.
 
-The crop file names will consist of two parts: a variable part (the row
-number of each target starting from 1) along with a fixed string which you
-can set with the @option{--suffix} option. Optionally, you may also use the
-@option{--namecol} option to define a column in the input catalog to use as
-the file name instead of numbers.
+The crop file names will consist of two parts: a variable part (the row number of each target starting from 1) along with a fixed string which you can set with the @option{--suffix} option.
+Optionally, you may also use the @option{--namecol} option to define a column in the input catalog to use as the file name instead of numbers.
 
 @item
-When only one crop is desired, the value to @option{--output} will be read
-as a file name. If no output is specified or if it is a directory, the
-output file name will follow the automatic output names of Gnuastro, see
-@ref{Automatic output}: The string given to @option{--suffix} will be
-replaced with the @file{.fits} suffix of the input.
+When only one crop is desired, the value to @option{--output} will be read as a file name.
+If no output is specified or if it is a directory, the output file name will follow the automatic output names of Gnuastro, see @ref{Automatic output}: the string given to @option{--suffix} will be replaced with the @file{.fits} suffix of the input.
 @end itemize
 
-The header of each output cropped image will contain the names of the input
-image(s) it was cut from. If a name is longer than the 70 character space
-that the FITS standard allows for header keyword values, the name will be
-cut into several keywords from the nearest slash (@key{/}). The keywords
-have the following format: @command{ICFn_m} (for Crop File). Where
-@command{n} is the number of the image used in this crop and @command{m} is
-the part of the name (it can be broken into multiple keywords). Following
-the name is another keyword named @command{ICFnPIX} which shows the pixel
-range from that input image in the same syntax as @ref{Crop section
-syntax}. So this string can be directly given to the @option{--section}
-option later.
-
-Once done, a log file can be created in the current directory with the
-@code{--log} option. This file will have three columns and the same number
-of rows as the number of cropped images. There are also comments on the top
-of the log file explaining basic information about the run and descriptions
-for the columns. A short description of the columns is also given below:
+The header of each output cropped image will contain the names of the input image(s) it was cut from.
+If a name is longer than the 70 character space that the FITS standard allows for header keyword values, the name will be cut into several keywords from the nearest slash (@key{/}).
+The keywords have the following format: @command{ICFn_m} (for Crop File), where @command{n} is the number of the image used in this crop and @command{m} is the part of the name (it can be broken into multiple keywords).
+Following the name is another keyword named @command{ICFnPIX} which shows the pixel range from that input image in the same syntax as @ref{Crop section syntax}.
+So this string can be directly given to the @option{--section} option later.
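+
+For example, a sketch of listing these keywords from a cropped image (assuming its data is in HDU 1) with Gnuastro's Fits program could be:
+
+@example
+$ astfits cropped.fits -h1 | grep ^ICF
+@end example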
+
+Once done, a log file can be created in the current directory with the @option{--log} option.
+This file will have three columns and the same number of rows as the number of cropped images.
+There are also comments on the top of the log file explaining basic information about the run and descriptions for the columns.
+A short description of the columns is also given below:
 
 @enumerate
 @item
@@ -12774,11 +9512,8 @@ The cropped image file name for that row.
 @item
 The number of input images that were used to create that image.
 @item
-A @code{0} if the central few pixels (value to the @option{--checkcenter}
-option) are blank and @code{1} if they aren't. When the crop was not
-defined by its center (see @ref{Crop modes}), or @option{--checkcenter} was
-given a value of 0 (see @ref{Invoking astcrop}), the center will not be
-checked and this column will be given a value of @code{-1}.
+A @code{0} if the central few pixels (value to the @option{--checkcenter} option) are blank and @code{1} if they aren't.
+When the crop was not defined by its center (see @ref{Crop modes}), or @option{--checkcenter} was given a value of 0 (see @ref{Invoking astcrop}), the center will not be checked and this column will be given a value of @code{-1}.
 @end enumerate
 
 
@@ -12801,18 +9536,12 @@ checked and this column will be given a value of @code{-1}.
 @node Arithmetic, Convolve, Crop, Data manipulation
 @section Arithmetic
 
-It is commonly necessary to do operations on some or all of the elements of
-a dataset independently (pixels in an image). For example, in the reduction
-of raw data it is necessary to subtract the Sky value (@ref{Sky value})
-from each image image. Later (once the images as warped into a single grid
-using Warp for example, see @ref{Warp}), the images are co-added (the
-output pixel grid is the average of the pixels of the individual input
-images). Arithmetic is Gnuastro's program for such operations on your
-datasets directly from the command-line. It currently uses the reverse
-polish or post-fix notation, see @ref{Reverse polish notation} and will work
-on the native data types of the input images/data to reduce CPU and RAM
-resources, see @ref{Numeric data types}. For more information on how to run
-Arithmetic, please see @ref{Invoking astarithmetic}.
+It is commonly necessary to do operations on some or all of the elements of a dataset independently (pixels in an image).
+For example, in the reduction of raw data it is necessary to subtract the Sky value (@ref{Sky value}) from each image.
+Later (once the images are warped into a single grid using Warp for example, see @ref{Warp}), the images are co-added (the output pixel grid is the average of the pixels of the individual input images).
+Arithmetic is Gnuastro's program for such operations on your datasets directly from the command-line.
+It currently uses the reverse polish or post-fix notation, see @ref{Reverse polish notation}, and will work on the native data types of the input images/data to reduce CPU and RAM resources, see @ref{Numeric data types}.
+For more information on how to run Arithmetic, please see @ref{Invoking astarithmetic}.
 
 
 @menu
@@ -12826,63 +9555,39 @@ Arithmetic, please see @ref{Invoking astarithmetic}.
 
 @cindex Post-fix notation
 @cindex Reverse Polish Notation
-The most common notation for arithmetic operations is the
-@url{https://en.wikipedia.org/wiki/Infix_notation, infix notation} where
-the operator goes between the two operands, for example @mymath{4+5}. While
-the infix notation is the preferred way in most programming languages,
-currently the Gnuastro's program (in particular Arithmetic and Table, when
-doing column arithmetic) do not use it. This is because it will require
-parenthesis which can complicate the implementation of the code. In the
-near future we do plan to also allow this
-notation@footnote{@url{https://savannah.gnu.org/task/index.php?13867}}, but
-for the time being (due to time constraints on the developers), arithmetic
-operations can only be done in the post-fix notation (also known as
-@url{https://en.wikipedia.org/wiki/Reverse_Polish_notation, reverse polish
-notation}). The Wikipedia article provides some excellent explanation on
-this notation but here we will give a short summary here for
-self-sufficiency.
-
-In the post-fix notation, the operator is placed after the operands, as we
-will see below this removes the need to define parenthesis for most
-ordinary operators. For example, instead of writing @command{5+6}, we write
-@command{5 6 +}. To easily understand how this notation works, you can
-think of each operand as a node in a ``last-in-first-out'' stack. Every
-time an operator is confronted, the operator pops the number of operands it
-needs from the top of the stack (so they don't exist in the stack any
-more), does its operation and pushes the result back on top of the
-stack. So if you want the average of 5 and 6, you would write: @command{5 6
-+ 2 /}. The operations that are done are:
+The most common notation for arithmetic operations is the @url{https://en.wikipedia.org/wiki/Infix_notation, infix notation} where the operator goes between the two operands, for example @mymath{4+5}.
+While the infix notation is the preferred way in most programming languages, currently Gnuastro's programs (in particular Arithmetic and Table, when doing column arithmetic) do not use it.
+This is because it will require parentheses, which can complicate the implementation of the code.
+In the near future we do plan to also allow this notation@footnote{@url{https://savannah.gnu.org/task/index.php?13867}}, but for the time being (due to time constraints on the developers), arithmetic operations can only be done in the post-fix notation (also known as @url{https://en.wikipedia.org/wiki/Reverse_Polish_notation, reverse polish notation}).
+The Wikipedia article provides some excellent explanation on this notation, but we will give a short summary here for self-sufficiency.
+
+In the post-fix notation, the operator is placed after the operands; as we will see below, this removes the need to define parentheses for most ordinary operators.
+For example, instead of writing @command{5+6}, we write @command{5 6 +}.
+To easily understand how this notation works, you can think of each operand as a node in a ``last-in-first-out'' stack.
+Every time an operator is confronted, the operator pops the number of operands it needs from the top of the stack (so they don't exist in the stack any more), does its operation and pushes the result back on top of the stack.
+So if you want the average of 5 and 6, you would write: @command{5 6 + 2 /}.
+The operations that are done are:
 
 @enumerate
 @item
-@command{5} is an operand, so it is pushed to the top of the stack (which
-is initially empty).
+@command{5} is an operand, so it is pushed to the top of the stack (which is initially empty).
 @item
 @command{6} is an operand, so it is pushed to the top of the stack.
 @item
-@command{+} is a @emph{binary} operator, so it will pop the top two
-elements of the stack out of it, and perform addition on them (the order is
-@mymath{5+6} in the example above). The result is @command{11} which is
-pushed to the top of the stack.
+@command{+} is a @emph{binary} operator, so it will pop the top two elements of the stack out of it, and perform addition on them (the order is @mymath{5+6} in the example above).
+The result is @command{11} which is pushed to the top of the stack.
 @item
 @command{2} is an operand so push it onto the top of the stack.
 @item
-@command{/} is a binary operator, so pull out the top two elements of
-the stack (top-most is @command{2}, then @command{11}) and divide the
-second one by the first.
+@command{/} is a binary operator, so pull out the top two elements of the stack (top-most is @command{2}, then @command{11}) and divide the second one by the first.
 @end enumerate
 
-In the Arithmetic program, the operands can be FITS images or numbers (see
-@ref{Invoking astarithmetic}). In Table's column arithmetic, they can be
-any column or a number (see @ref{Column arithmetic}).
+In the Arithmetic program, the operands can be FITS images or numbers (see @ref{Invoking astarithmetic}).
+In Table's column arithmetic, they can be any column or a number (see @ref{Column arithmetic}).
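+
+For example, a sketch of taking the same average over two hypothetical images of the same size (using @command{2f} to force a floating point division, as described under the @command{pow} operator below) could be:
+
+@example
+$ astarithmetic a.fits b.fits + 2f /
+@end example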
 
-With this notation, very complicated procedures can be created without the
-need for parenthesis or worrying about precedence. Even functions which
-take an arbitrary number of arguments can be defined in this notation. This
-is a very powerful notation and is used in languages like Postscript
-@footnote{See the EPS and PDF part of @ref{Recognized file formats} for a
-little more on the Postscript language.} which produces PDF files when
-compiled.
+With this notation, very complicated procedures can be created without the need for parentheses or worrying about precedence.
+Even functions which take an arbitrary number of arguments can be defined in this notation.
+This is a very powerful notation and is used in languages like Postscript@footnote{See the EPS and PDF part of @ref{Recognized file formats} for a little more on the Postscript language.} which produces PDF files when compiled.
 
 
 
@@ -12891,14 +9596,11 @@ compiled.
 @node Arithmetic operators, Invoking astarithmetic, Reverse polish notation, Arithmetic
 @subsection Arithmetic operators
 
-The recognized operators in Arithmetic are listed below. See @ref{Reverse
-polish notation} for more on how the operators and operands should be
-ordered on the command-line. The operands to all operators can be a data
-array (for example a FITS image) or a number, the output will be an array
-or number according to the inputs. For example a number multiplied by an
-array will produce an array. The conditional operators will return pixel,
-or numerical values of 0 (false) or 1 (true) and stored in an
-@code{unsigned char} data type (see @ref{Numeric data types}).
+The recognized operators in Arithmetic are listed below.
+See @ref{Reverse polish notation} for more on how the operators and operands should be ordered on the command-line.
+The operands to all operators can be a data array (for example a FITS image) or a number; the output will be an array or number according to the inputs.
+For example a number multiplied by an array will produce an array.
+The conditional operators will return pixel or numerical values of 0 (false) or 1 (true), stored in an @code{unsigned char} data type (see @ref{Numeric data types}).
 
 @table @command
 
@@ -12909,98 +9611,72 @@ Addition, so ``@command{4 5 +}'' is equivalent to @mymath{4+5}.
 Subtraction, so ``@command{4 5 -}'' is equivalent to @mymath{4-5}.
 
 @item x
-Multiplication, so ``@command{4 5 x}'' is equivalent to
-@mymath{4\times5}.
+Multiplication, so ``@command{4 5 x}'' is equivalent to @mymath{4\times5}.
 
 @item /
 Division, so ``@command{4 5 /}'' is equivalent to @mymath{4/5}.
 
 @item %
-Modulo (remainder), so ``@command{3 2 %}'' is equivalent to
-@mymath{1}. Note that the modulo operator only works on integer types.
+Modulo (remainder), so ``@command{3 2 %}'' is equivalent to @mymath{1}.
+Note that the modulo operator only works on integer types.
 
 @item abs
-Absolute value of first operand, so ``@command{4 abs}'' is
-equivalent to @mymath{|4|}.
+Absolute value of first operand, so ``@command{4 abs}'' is equivalent to @mymath{|4|}.
 
 @item pow
-First operand to the power of the second, so ``@command{4.3 5f pow}'' is
-equivalent to @mymath{4.3^{5}}. Currently @code{pow} will only work on
-single or double precision floating point numbers or images. To be sure
-that a number is read as a floating point (even if it doesn't have any
-non-zero decimals) put an @code{f} after it.
+First operand to the power of the second, so ``@command{4.3 5f pow}'' is equivalent to @mymath{4.3^{5}}.
+Currently @code{pow} will only work on single or double precision floating point numbers or images.
+To be sure that a number is read as a floating point (even if it doesn't have any non-zero decimals) put an @code{f} after it.
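+
+For example, a sketch of squaring every pixel of a (hypothetical) @file{image.fits}, first converting it to floating point with the @command{float32} type-conversion operator so that @command{pow} can operate on it, could be:
+
+@example
+$ astarithmetic image.fits float32 2f pow
+@end example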
 
 @item sqrt
-The square root of the first operand, so ``@command{5 sqrt}'' is equivalent
-to @mymath{\sqrt{5}}. The output will have a floating point type, but its
-precision is determined from the input: if the input is a 64-bit floating
-point, the output will also be 64-bit. Otherwise, the output will be 32-bit
-floating point (see @ref{Numeric data types} for the respective
-precision). Therefore if you require 64-bit precision in estimating the
-square root, convert the input to 64-bit floating point first, for example
-with @code{5 float64 sqrt}.
+The square root of the first operand, so ``@command{5 sqrt}'' is equivalent to @mymath{\sqrt{5}}.
+The output will have a floating point type, but its precision is determined from the input: if the input is a 64-bit floating point, the output will also be 64-bit.
+Otherwise, the output will be 32-bit floating point (see @ref{Numeric data types} for the respective precision).
+Therefore if you require 64-bit precision in estimating the square root, convert the input to 64-bit floating point first, for example with @code{5 float64 sqrt}.
 
 @item log
-Natural logarithm of first operand, so ``@command{4 log}'' is equivalent to
-@mymath{ln(4)}. The output type is determined from the input, see the
-explanation under @command{sqrt} for more.
+Natural logarithm of first operand, so ``@command{4 log}'' is equivalent to @mymath{ln(4)}.
+The output type is determined from the input, see the explanation under @command{sqrt} for more.
 
 @item log10
-Base-10 logarithm of first operand, so ``@command{4 log10}'' is equivalent
-to @mymath{\log(4)}. The output type is determined from the input, see the
-explanation under @command{sqrt} for more.
+Base-10 logarithm of first operand, so ``@command{4 log10}'' is equivalent to @mymath{\log(4)}.
+The output type is determined from the input, see the explanation under @command{sqrt} for more.
 
 @item minvalue
-Minimum (non-blank) value in the top operand on the stack, so
-``@command{a.fits minvalue}'' will push the the minimum pixel value in this
-image onto the stack. Therefore this operator is mainly intended for data
-(for example images), if the top operand is a number, this operator just
-returns it without any change. So note that when this operator acts on a
-single image, the output will no longer be an image, but a number. The
-output of this operand is in the same type as the input.
+Minimum (non-blank) value in the top operand on the stack, so ``@command{a.fits minvalue}'' will push the minimum pixel value in this image onto the stack.
+Therefore this operator is mainly intended for data (for example images); if the top operand is a number, this operator just returns it without any change.
+So note that when this operator acts on a single image, the output will no longer be an image, but a number.
+The output of this operator is in the same type as the input.
 
 @item maxvalue
-Maximum (non-blank) value of first operand in the same type, similar to
-@command{minvalue}.
+Maximum (non-blank) value of first operand in the same type, similar to @command{minvalue}.
 
 @item numbervalue
-Number of non-blank elements in first operand in the @code{uint64} type,
-similar to @command{minvalue}.
+Number of non-blank elements in first operand in the @code{uint64} type, similar to @command{minvalue}.
 
 @item sumvalue
-Sum of non-blank elements in first operand in the @code{float32} type,
-similar to @command{minvalue}.
+Sum of non-blank elements in first operand in the @code{float32} type, similar to @command{minvalue}.
 
 @item meanvalue
-Mean value of non-blank elements in first operand in the @code{float32}
-type, similar to @command{minvalue}.
+Mean value of non-blank elements in first operand in the @code{float32} type, similar to @command{minvalue}.
 
 @item stdvalue
-Standard deviation of non-blank elements in first operand in the
-@code{float32} type, similar to @command{minvalue}.
+Standard deviation of non-blank elements in first operand in the @code{float32} type, similar to @command{minvalue}.
 
 @item medianvalue
-Median of non-blank elements in first operand with the same type, similar
-to @command{minvalue}.
+Median of non-blank elements in first operand with the same type, similar to @command{minvalue}.
 
 @cindex NaN
 @item min
-For each pixel, find the minimum value in all given datasets. The output
-will have the same type as the input.
+For each pixel, find the minimum value in all given datasets.
+The output will have the same type as the input.
 
-The first popped operand to this operator must be a positive integer number
-which specifies how many further operands should be popped from the
-stack. All the subsequently popped operands must have the same type and
-size. This operator (and all the variable-operand operators similar to it
-that are discussed below) will work in multi-threaded mode unless
-Arithmetic is called with the @option{--numthreads=1} option, see
-@ref{Multi-threaded operations}.
+The first popped operand to this operator must be a positive integer number which specifies how many further operands should be popped from the stack.
+All the subsequently popped operands must have the same type and size.
+This operator (and all the variable-operand operators similar to it that are discussed below) will work in multi-threaded mode unless Arithmetic is called with the @option{--numthreads=1} option, see @ref{Multi-threaded operations}.
 
-Each pixel of the output of the @code{min} operator will be given the
-minimum value of the same pixel from all the popped operands/images. For
-example the following command will produce an image with the same size and
-type as the three inputs, but each output pixel value will be the minimum
-of the same pixel's values in all three input images.
+Each pixel of the output of the @code{min} operator will be given the minimum value of the same pixel from all the popped operands/images.
+For example the following command will produce an image with the same size and type as the three inputs, but each output pixel value will be the minimum of the same pixel's values in all three input images.
 
 @example
 $ astarithmetic a.fits b.fits c.fits 3 min
@@ -13013,200 +9689,146 @@ Important notes:
 NaN/blank pixels will be ignored, see @ref{Blank pixels}.
 
 @item
-The output will have the same type as the inputs. This is natural for the
-@command{min} and @command{max} operators, but for other similar operators
-(for example @command{sum}, or @command{average}) the per-pixel operations
-will be done in double precision floating point and then stored back in the
-input type. Therefore, if the input was an integer, C's internal type
-conversion will be used.
+The output will have the same type as the inputs.
+This is natural for the @command{min} and @command{max} operators, but for other similar operators (for example @command{sum}, or @command{average}) the per-pixel operations will be done in double precision floating point and then stored back in the input type.
+Therefore, if the input was an integer, C's internal type conversion will be used.
 
 @end itemize
 
 @item max
-For each pixel, find the maximum value in all given datasets. The output
-will have the same type as the input. This operator is called similar to
-the @command{min} operator, please see there for more.
+For each pixel, find the maximum value in all given datasets.
+The output will have the same type as the input.
+This operator is called similarly to the @command{min} operator, please see there for more.
 
 @item number
-For each pixel count the number of non-blank pixels in all given
-datasets. The output will be an unsigned 32-bit integer datatype (see
-@ref{Numeric data types}). This operator is called similar to the
-@command{min} operator, please see there for more.
+For each pixel count the number of non-blank pixels in all given datasets.
+The output will be an unsigned 32-bit integer datatype (see @ref{Numeric data types}).
+This operator is called similarly to the @command{min} operator, please see there for more.
 
 @item sum
-For each pixel, calculate the sum in all given datasets. The output will
-have the a single-precision (32-bit) floating point type. This operator is
-called similar to the @command{min} operator, please see there for more.
+For each pixel, calculate the sum in all given datasets.
+The output will have a single-precision (32-bit) floating point type.
+This operator is called similarly to the @command{min} operator, please see there for more.
 
 @item mean
-For each pixel, calculate the mean in all given datasets. The output will
-have the a single-precision (32-bit) floating point type. This operator is
-called similar to the @command{min} operator, please see there for more.
+For each pixel, calculate the mean in all given datasets.
+The output will have a single-precision (32-bit) floating point type.
+This operator is called similarly to the @command{min} operator, please see there for more.
 
 @item std
-For each pixel, find the standard deviation in all given datasets. The
-output will have the a single-precision (32-bit) floating point type. This
-operator is called similar to the @command{min} operator, please see there
-for more.
+For each pixel, find the standard deviation in all given datasets.
+The output will have a single-precision (32-bit) floating point type.
+This operator is called similarly to the @command{min} operator, please see there for more.
 
 @item median
-For each pixel, find the median in all given datasets. The output will have
-the a single-precision (32-bit) floating point type. This operator is
-called similar to the @command{min} operator, please see there for more.
+For each pixel, find the median in all given datasets.
+The output will have a single-precision (32-bit) floating point type.
+This operator is called similarly to the @command{min} operator, please see there for more.
 
 @item sigclip-number
-For each pixel, find the sigma-clipped number (after removing outliers) in
-all given datasets. The output will have the an unsigned 32-bit integer
-type (see @ref{Numeric data types}).
-
-This operator will combine the specified number of inputs into a single
-output that contains the number of remaining elements after
-@mymath{\sigma}-clipping on each element/pixel (for more on
-@mymath{\sigma}-clipping, see @ref{Sigma clipping}). This operator is very
-similar to @command{min}, with the exception that it expects two operands
-(parameters for sigma-clipping) before the total number of inputs. The
-first popped operand is the termination criteria and the second is the
-multiple of @mymath{\sigma}.
-
-For example in the command below, the first popped operand (@command{0.2})
-is the sigma clipping termination criteria. If the termination criteria is
-larger than 1 it is interpreted as the number of clips to do. But if it is
-between 0 and 1, then it is the tolerance level on the standard deviation
-(see @ref{Sigma clipping}). The second popped operand (@command{5}) is the
-multiple of sigma to use in sigma-clipping. The third popped operand
-(@command{10}) is number of datasets that will be used (similar to the
-first popped operand to @command{min}).
+For each pixel, find the sigma-clipped number (after removing outliers) in all given datasets.
+The output will have an unsigned 32-bit integer type (see @ref{Numeric data types}).
+
+This operator will combine the specified number of inputs into a single output that contains the number of remaining elements after @mymath{\sigma}-clipping on each element/pixel (for more on @mymath{\sigma}-clipping, see @ref{Sigma clipping}).
+This operator is very similar to @command{min}, with the exception that it expects two operands (parameters for sigma-clipping) before the total number of inputs.
+The first popped operand is the termination criteria and the second is the multiple of @mymath{\sigma}.
+
+For example in the command below, the first popped operand (@command{0.2}) is the sigma clipping termination criteria.
+If the termination criteria is larger than 1, it is interpreted as the number of clips to do.
+But if it is between 0 and 1, then it is the tolerance level on the standard deviation (see @ref{Sigma clipping}).
+The second popped operand (@command{5}) is the multiple of sigma to use in sigma-clipping.
+The third popped operand (@command{3}) is the number of datasets that will be used (similar to the first popped operand to @command{min}).
 
 @example
 astarithmetic a.fits b.fits c.fits 3 5 0.2 sigclip-number
 @end example
 
 @item sigclip-median
-For each pixel, find the sigma-clipped median in all given datasets. The
-output will have the a single-precision (32-bit) floating point type. This
-operator is called similar to the @command{sigclip-number} operator, please
-see there for more.
+For each pixel, find the sigma-clipped median in all given datasets.
+The output will have a single-precision (32-bit) floating point type.
+This operator is called similarly to the @command{sigclip-number} operator, please see there for more.
 
 @item sigclip-mean
-For each pixel, find the sigma-clipped mean in all given datasets. The
-output will have the a single-precision (32-bit) floating point type. This
-operator is called similar to the @command{sigclip-number} operator, please
-see there for more.
+For each pixel, find the sigma-clipped mean in all given datasets.
+The output will have a single-precision (32-bit) floating point type.
+This operator is called similarly to the @command{sigclip-number} operator, please see there for more.
 
 @item sigclip-std
-For each pixel, find the sigma-clipped standard deviation in all given
-datasets. The output will have the a single-precision (32-bit) floating
-point type. This operator is called similar to the @command{sigclip-number}
-operator, please see there for more.
+For each pixel, find the sigma-clipped standard deviation in all given datasets.
+The output will have a single-precision (32-bit) floating point type.
+This operator is called similarly to the @command{sigclip-number} operator; please see there for more.
 
 @item filter-mean
-Apply mean filtering (or @url{https://en.wikipedia.org/wiki/Moving_average,
-moving average}) on the input dataset. During mean filtering, each pixel
-(data element) is replaced by the mean value of all its surrounding pixels
-(excluding blank values). The number of surrounding pixels in each
-dimension (to calculate the mean) is determined through the earlier
-operands that have been pushed onto the stack prior to the input
-dataset. The number of necessary operands is determined by the dimensions
-of the input dataset (first popped operand). The order of the dimensions on
-the command-line is the order in FITS format. Here is one example:
+Apply mean filtering (or @url{https://en.wikipedia.org/wiki/Moving_average, moving average}) on the input dataset.
+During mean filtering, each pixel (data element) is replaced by the mean value of all its surrounding pixels (excluding blank values).
+The number of surrounding pixels in each dimension (to calculate the mean) is determined through the earlier operands that have been pushed onto the stack prior to the input dataset.
+The number of necessary operands is determined by the dimensions of the input dataset (first popped operand).
+The order of the dimensions on the command-line is the order in FITS format.
+Here is one example:
 
 @example
 $ astarithmetic 5 4 image.fits filter-mean
 @end example
 
 @noindent
-In this example, each pixel is replaced by the mean of a 5 by 4 box around
-it. The box is 5 pixels along the first FITS dimension (horizontal when
-viewed in ds9) and 4 pixels along the second FITS dimension (vertical).
+In this example, each pixel is replaced by the mean of a 5 by 4 box around it.
+The box is 5 pixels along the first FITS dimension (horizontal when viewed in ds9) and 4 pixels along the second FITS dimension (vertical).
 
-Each pixel will be placed in the center of the box that the mean is
-calculated on. If the given width along a dimension is even, then the
-center is assumed to be between the pixels (not in the center of a
-pixel). When the pixel is close to the edge, the pixels of the box that
-fall outside the image are ignored. Therefore, on the edge, less points
-will be used in calculating the mean.
+Each pixel will be placed in the center of the box that the mean is calculated on.
+If the given width along a dimension is even, then the center is assumed to be between the pixels (not in the center of a pixel).
+When the pixel is close to the edge, the pixels of the box that fall outside the image are ignored.
+Therefore, on the edge, fewer points will be used in calculating the mean.
 
-The final effect of mean filtering is to smooth the input image, it is
-essentially a convolution with a kernel that has identical values for all
-its pixels (is flat), see @ref{Convolution process}.
+The final effect of mean filtering is to smooth the input image; it is essentially a convolution with a kernel that has identical values for all its pixels (is flat), see @ref{Convolution process}.
 
-Note that blank pixels will also be affected by this operator: if there are
-any non-blank elements in the box surrounding a blank pixel, in the
-filtered image, it will have the mean of the non-blank elements, therefore
-it won't be blank any more. If blank elements are important for your
-analysis, you can use the @code{isblank} with the @code{where} operator to
-set them back to blank after filtering.
+Note that blank pixels will also be affected by this operator: if there are any non-blank elements in the box surrounding a blank pixel, then in the filtered image it will have the mean of the non-blank elements, and therefore it won't be blank any more.
+If blank elements are important for your analysis, you can use @code{isblank} with the @code{where} operator to set them back to blank after filtering.
 
 @item filter-median
-Apply @url{https://en.wikipedia.org/wiki/Median_filter, median filtering}
-on the input dataset. This is very similar to @command{filter-mean}, except
-that instead of the mean value of the box pixels, the median value is used
-to replace a pixel value. For more on how to use this operator, please see
-@command{filter-mean}.
+Apply @url{https://en.wikipedia.org/wiki/Median_filter, median filtering} on the input dataset.
+This is very similar to @command{filter-mean}, except that instead of the mean value of the box pixels, the median value is used to replace a pixel value.
+For more on how to use this operator, please see @command{filter-mean}.
 
-The median is less susceptible to outliers compared to the mean. As a
-result, after median filtering, the pixel values will be more discontinuous
-than mean filtering.
+The median is less susceptible to outliers compared to the mean.
+As a result, after median filtering, the pixel values will be more discontinuous than after mean filtering.
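+
+For example, the command below (using the same hypothetical image and box size as the @command{filter-mean} example above) replaces each pixel with the median of the 5 by 4 box around it:
+
+@example
+$ astarithmetic 5 4 image.fits filter-median
+@end example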
 
 @item filter-sigclip-mean
-Apply a @mymath{\sigma}-clipped mean filtering onto the input dataset. This
-is very similar to @code{filter-mean}, except that all outliers (identified
-by the @mymath{\sigma}-clipping algorithm) have been removed, see
-@ref{Sigma clipping} for more on the basics of this algorithm. As described
-there, two extra input parameters are necessary for
-@mymath{\sigma}-clipping: the multiple of @mymath{\sigma} and the
-termination criteria. @code{filter-sigclip-mean} therefore needs to pop two
-other operands from the stack after the dimensions of the box.
-
-For example the line below uses the same box size as the example of
-@code{filter-mean}. However, all elements in the box that are iteratively
-beyond @mymath{3\sigma} of the distribution's median are removed from the
-final calculation of the mean until the change in @mymath{\sigma} is less
-than @mymath{0.2}.
+Apply a @mymath{\sigma}-clipped mean filtering onto the input dataset.
+This is very similar to @code{filter-mean}, except that all outliers (identified by the @mymath{\sigma}-clipping algorithm) have been removed; see @ref{Sigma clipping} for more on the basics of this algorithm.
+As described there, two extra input parameters are necessary for @mymath{\sigma}-clipping: the multiple of @mymath{\sigma} and the termination criteria.
+@code{filter-sigclip-mean} therefore needs to pop two other operands from the stack after the dimensions of the box.
+
+For example, the line below uses the same box size as the example of @code{filter-mean}.
+However, all elements in the box that are iteratively beyond @mymath{3\sigma} of the distribution's median are removed from the final calculation of the mean until the change in @mymath{\sigma} is less than @mymath{0.2}.
 
 @example
 $ astarithmetic 3 0.2 5 4 image.fits filter-sigclip-mean
 @end example
 
-The median (which needs a sorted dataset) is necessary for
-@mymath{\sigma}-clipping, therefore @code{filter-sigclip-mean} can be
-significantly slower than @code{filter-mean}. However, if there are strong
-outliers in the dataset that you want to ignore (for example emission lines
-on a spectrum when finding the continuum), this is a much better solution.
+The median (which needs a sorted dataset) is necessary for @mymath{\sigma}-clipping; therefore @code{filter-sigclip-mean} can be significantly slower than @code{filter-mean}.
+However, if there are strong outliers in the dataset that you want to ignore (for example emission lines on a spectrum when finding the continuum), this is a much better solution.
 
 @item filter-sigclip-median
-Apply a @mymath{\sigma}-clipped median filtering onto the input
-dataset. This operator and its necessary operands are almost identical to
-@code{filter-sigclip-mean}, except that after @mymath{\sigma}-clipping, the
-median value (which is less affected by outliers than the mean) is added
-back to the stack.
+Apply a @mymath{\sigma}-clipped median filtering onto the input dataset.
+This operator and its necessary operands are almost identical to @code{filter-sigclip-mean}, except that after @mymath{\sigma}-clipping, the median value (which is less affected by outliers than the mean) is added back to the stack.
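+
+For example, the command below is the equivalent of the @code{filter-sigclip-mean} example above, only using the median:
+
+@example
+$ astarithmetic 3 0.2 5 4 image.fits filter-sigclip-median
+@end example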
 
 @item interpolate-medianngb
-Interpolate all the blank elements of the second popped operand with the
-median of its nearest non-blank neighbors. The number of the nearest
-non-blank neighbors used to calculate the median is given by the first
-popped operand. Note that the distance of the nearest non-blank neighbors
-is irrelevant in this interpolation.
+Interpolate all the blank elements of the second popped operand with the median of its nearest non-blank neighbors.
+The number of the nearest non-blank neighbors used to calculate the median is given by the first popped operand.
+Note that the distance of the nearest non-blank neighbors is irrelevant in this interpolation.
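+
+For example, in the hypothetical command below (following the operand order described above), each blank element of @file{image.fits} is given the median of its 9 nearest non-blank neighbors:
+
+@example
+$ astarithmetic image.fits 9 interpolate-medianngb
+@end example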
 
 @item collapse-sum
-Collapse the given dataset (second popped operand), by summing all elements
-along the first popped operand (a dimension in FITS standard: counting from
-one, from fastest dimension). The returned dataset has one dimension less
-compared to the input.
-
-The output will have a double-precision floating point type irrespective of
-the input dataset's type. Doing the operation in double-precision (64-bit)
-floating point will help the collapse (summation) be affected less by
-floating point errors. But afterwards, single-precision floating points are
-usually enough in real (noisy) datasets. So depending on the type of the
-input and its nature, it is recommended to use one of the type conversion
-operators on the returned dataset.
+Collapse the given dataset (second popped operand) by summing all elements along the first popped operand (a dimension in the FITS standard: counting from one, from the fastest dimension).
+The returned dataset has one dimension less compared to the input.
+
+The output will have a double-precision floating point type irrespective of the input dataset's type.
+Doing the operation in double-precision (64-bit) floating point will help the collapse (summation) be affected less by floating point errors.
+But afterwards, single-precision floating points are usually enough in real (noisy) datasets.
+So depending on the type of the input and its nature, it is recommended to use one of the type conversion operators on the returned dataset.
 
 @cindex World Coordinate System (WCS)
-If any WCS is present, the returned dataset will also lack the respective
-dimension in its WCS matrix. Therefore, when the WCS is important for later
-processing, be sure that the input is aligned with the respective axises:
-all non-diagonal elements in the WCS matrix are zero.
+If any WCS is present, the returned dataset will also lack the respective dimension in its WCS matrix.
+Therefore, when the WCS is important for later processing, be sure that the input is aligned with the respective axes: all non-diagonal elements in the WCS matrix are zero.
 
 @cindex Data cubes
 @cindex 3D data-cubes
@@ -13214,437 +9836,322 @@ all non-diagonal elements in the WCS matrix are zero.
 @cindex Narrow-band image
 @cindex IFU: Integral Field Unit
 @cindex Integral field unit (IFU)
-One common application of this operator is the creation of pseudo
-broad-band or narrow-band 2D images from 3D data cubes. For example
-integral field unit (IFU) data products that have two spatial dimensions
-(first two FITS dimensions) and one spectral dimension (third FITS
-dimension). The command below will collapse the whole third dimension into
-a 2D array the size of the first two dimensions, and then convert the
-output to single-precision floating point (as discussed above).
+One common application of this operator is the creation of pseudo broad-band or narrow-band 2D images from 3D data cubes.
+For example, integral field unit (IFU) data products have two spatial dimensions (the first two FITS dimensions) and one spectral dimension (the third FITS dimension).
+The command below will collapse the whole third dimension into a 2D array the size of the first two dimensions, and then convert the output to single-precision floating point (as discussed above).
 
 @example
 $ astarithmetic cube.fits 3 collapse-sum float32
 @end example
 
 @item collapse-mean
-Similar to @option{collapse-sum}, but the returned dataset will be the mean
-value along the collapsed dimension, not the sum.
+Similar to @option{collapse-sum}, but the returned dataset will be the mean value along the collapsed dimension, not the sum.
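+
+For example, a command analogous to the @option{collapse-sum} example above will produce the mean along the third dimension of the same hypothetical cube:
+
+@example
+$ astarithmetic cube.fits 3 collapse-mean float32
+@end example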
 
 @item collapse-number
-Similar to @option{collapse-sum}, but the returned dataset will be the
-number of non-blank values along the collapsed dimension. The output will
-have a 32-bit signed integer type. If the input dataset doesn't have blank
-values, all the elements in the returned dataset will have a single value
-(the length of the collapsed dimension). Therefore this is mostly relevant
-when there are blank values in the dataset.
+Similar to @option{collapse-sum}, but the returned dataset will be the number of non-blank values along the collapsed dimension.
+The output will have a 32-bit signed integer type.
+If the input dataset doesn't have blank values, all the elements in the returned dataset will have a single value (the length of the collapsed dimension).
+Therefore this is mostly relevant when there are blank values in the dataset.
 
 @item collapse-min
-Similar to @option{collapse-sum}, but the returned dataset will have the
-same numeric type as the input and will contain the minimum value for each
-pixel along the collapsed dimension.
+Similar to @option{collapse-sum}, but the returned dataset will have the same numeric type as the input and will contain the minimum value for each pixel along the collapsed dimension.
 
 @item collapse-max
-Similar to @option{collapse-sum}, but the returned dataset will have the
-same numeric type as the input and will contain the maximum value for each
-pixel along the collapsed dimension.
+Similar to @option{collapse-sum}, but the returned dataset will have the same numeric type as the input and will contain the maximum value for each pixel along the collapsed dimension.
 
 @item unique
-Remove all duplicate (and blank) elements from the first popped
-operand. The unique elements of the dataset will be stored in a
-single-dimensional dataset.
+Remove all duplicate (and blank) elements from the first popped operand.
+The unique elements of the dataset will be stored in a single-dimensional dataset.
 
-Recall that by default, single-dimensional datasets are stored as a table
-column in the output. But you can use @option{--onedasimage} to store them
-as a single-dimensional FITS array/image.
+Recall that by default, single-dimensional datasets are stored as a table column in the output.
+But you can use @option{--onedasimage} or @option{--onedonstdout} to respectively store them as a single-dimensional FITS array/image, or to print them on the standard output.
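+
+For example, the hypothetical command below will print the unique values of @file{image.fits} directly on the standard output:
+
+@example
+$ astarithmetic image.fits unique --onedonstdout
+@end example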
 
 @item erode
 @cindex Erosion
-Erode the foreground pixels (with value @code{1}) of the input dataset
-(second popped operand). The first popped operand is the connectivity (see
-description in @command{connected-components}). Erosion is simply a
-flipping of all foreground pixels (to background; with value @code{0}) that
-are ``touching'' background pixels. ``Touching'' is defined by the
-connectivity. In effect, this carves off the outer borders of the
-foreground, making them thinner. This operator assumes a binary dataset
-(all pixels are @code{0} and @code{1}).
+Erode the foreground pixels (with value @code{1}) of the input dataset (second popped operand).
+The first popped operand is the connectivity (see description in @command{connected-components}).
+Erosion is simply a flipping of all foreground pixels (to background; with value @code{0}) that are ``touching'' background pixels.
+``Touching'' is defined by the connectivity.
+In effect, this carves off the outer borders of the foreground, making them thinner.
+This operator assumes a binary dataset (all pixels are @code{0} and @code{1}).
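+
+For example, the hypothetical command below erodes @file{binary.fits} with a connectivity of 2 (@code{dilate} below is called in the same manner):
+
+@example
+$ astarithmetic binary.fits 2 erode
+@end example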
 
 @item dilate
 @cindex Dilation
-Dilate the foreground pixels (with value @code{1}) of the input dataset
-(second popped operand). The first popped operand is the connectivity (see
-description in @command{connected-components}). Erosion is simply a
-flipping of all background pixels (with value @code{0}) to foreground that
-are ``touching'' foreground pixels. ``Touching'' is defined by the
-connectivity. In effect, this expands the outer borders of the
-foreground. This operator assumes a binary dataset (all pixels are @code{0}
-and @code{1}).
+Dilate the foreground pixels (with value @code{1}) of the input dataset (second popped operand).
+The first popped operand is the connectivity (see description in @command{connected-components}).
+Dilation is simply a flipping of all background pixels (with value @code{0}) to foreground that are ``touching'' foreground pixels.
+``Touching'' is defined by the connectivity.
+In effect, this expands the outer borders of the foreground.
+This operator assumes a binary dataset (all pixels are @code{0} and @code{1}).
 
 @item connected-components
 @cindex Connected components
-Find the connected components in the input dataset (second popped
-operand). The first popped is the connectivity used in the connected
-components algorithm. The second popped operand is the dataset where
-connected components are to be found. It is assumed to be a binary image
-(with values of 0 or 1). It must have an 8-bit unsigned integer type which
-is the format produced by conditional operators. This operator will return
-a labeled dataset where the non-zero pixels in the input will be labeled
-with a counter (starting from 1).
-
-The connectivity is a number between 1 and the number of dimensions in the
-dataset (inclusive). 1 corresponds to the weakest (symmetric) connectivity
-between elements and the number of dimensions the strongest. For example on
-a 2D image, a connectivity of 1 corresponds to 4-connected neighbors and 2
-corresponds to 8-connected neighbors.
-
-One example usage of this operator can be the identification of regions
-above a certain threshold, as in the command below. With this command,
-Arithmetic will first separate all pixels greater than 100 into a binary
-image (where pixels with a value of 1 are above that value). Afterwards, it
-will label all those that are connected.
+Find the connected components in the input dataset (second popped operand).
+The first popped operand is the connectivity used in the connected components algorithm.
+The second popped operand is the dataset where connected components are to be found.
+It is assumed to be a binary image (with values of 0 or 1).
+It must have an 8-bit unsigned integer type, which is the format produced by conditional operators.
+This operator will return a labeled dataset where the non-zero pixels in the input will be labeled with a counter (starting from 1).
+
+The connectivity is a number between 1 and the number of dimensions in the dataset (inclusive).
+1 corresponds to the weakest (symmetric) connectivity between elements and the number of dimensions to the strongest.
+For example, on a 2D image, a connectivity of 1 corresponds to 4-connected neighbors and 2 corresponds to 8-connected neighbors.
+
+One example usage of this operator can be the identification of regions above a certain threshold, as in the command below.
+With this command, Arithmetic will first separate all pixels greater than 100 into a binary image (where pixels with a value of 1 are above that value).
+Afterwards, it will label all those that are connected.
 
 @example
 $ astarithmetic in.fits 100 gt 2 connected-components
 @end example
 
-If your input dataset doesn't have a binary type, but you know all its
-values are 0 or 1, you can use the @code{uint8} operator (below) to convert
-it to binary.
+If your input dataset doesn't have a binary type, but you know all its values are 0 or 1, you can use the @code{uint8} operator (below) to convert it to binary.
 
 @item fill-holes
-Flip background (0) pixels surrounded by foreground (1) in a binary
-dataset. This operator takes two operands (similar to
-@code{connected-components}): the first popped operand is the connectivity
-(to define a hole) and the second is the binary (0 or 1 valued) dataset to
-fill holes in.
+Flip background (0) pixels surrounded by foreground (1) in a binary dataset.
+This operator takes two operands (similar to @code{connected-components}): the first popped operand is the connectivity (to define a hole) and the second is the binary (0 or 1 valued) dataset to fill holes in.
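+
+For example, the hypothetical command below fills all holes of @file{binary.fits} that are defined by a connectivity of 2:
+
+@example
+$ astarithmetic binary.fits 2 fill-holes
+@end example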
 
 @item invert
-Invert an unsigned integer dataset. This is the only operator that ignores
-blank values (which are set to be the maximum values in the unsigned
-integer types).
-
-This is useful in cases where the target(s) has(have) been imaged in
-absorption as raw formats (which are unsigned integer types). With this
-option, the maximum value for the given type will be subtracted from each
-pixel value, thus ``inverting'' the image, so the target(s) can be treated
-as emission. This can be useful when the higher-level analysis
-methods/tools only work on emission (positive skew in the noise, not
-negative).
+Invert an unsigned integer dataset.
+This is the only operator that ignores blank values (which are set to be the maximum values in the unsigned integer types).
+
+This is useful in cases where the target(s) have been imaged in absorption as raw formats (which are unsigned integer types).
+With this option, the maximum value for the given type will be subtracted from each pixel value, thus ``inverting'' the image, so the target(s) can be treated as emission.
+This can be useful when the higher-level analysis methods/tools only work on emission (positive skew in the noise, not negative).
 
 @item lt
-Less than: If the second popped (or left operand in infix notation, see
-@ref{Reverse polish notation}) value is smaller than the first popped
-operand, then this function will return a value of 1, otherwise it will
-return a value of 0. If both operands are images, then all the pixels will
-be compared with their counterparts in the other image. If only one operand
-is an image, then all the pixels will be compared with the the single value
-(number) of the other operand. Finally if both are numbers, then the output
-is also just one number (0 or 1). When the output is not a single number,
-it will be stored as an @code{unsigned char} type.
+Less than: If the second popped (or left operand in infix notation, see @ref{Reverse polish notation}) value is smaller than the first popped operand, then this function will return a value of 1; otherwise it will return a value of 0.
+If both operands are images, then all the pixels will be compared with their counterparts in the other image.
+If only one operand is an image, then all the pixels will be compared with the single value (number) of the other operand.
+Finally, if both are numbers, then the output is also just one number (0 or 1).
+When the output is not a single number, it will be stored as an @code{unsigned char} type.
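+
+For example, the hypothetical command below will return a binary image with a value of 1 for all pixels of @file{image.fits} that are smaller than 100:
+
+@example
+$ astarithmetic image.fits 100 lt
+@end example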
 
 @item le
-Less or equal: similar to @code{lt} (`less than' operator), but returning 1
-when the second popped operand is smaller or equal to the first.
+Less or equal: similar to @code{lt} (`less than' operator), but returning 1 when the second popped operand is smaller or equal to the first.
 
 @item gt
-Greater than: similar to @code{lt} (`less than' operator), but
-returning 1 when the second popped operand is greater than the first.
+Greater than: similar to @code{lt} (`less than' operator), but returning 1 when the second popped operand is greater than the first.
 
 @item ge
-Greater or equal: similar to @code{lt} (`less than' operator), but
-returning 1 when the second popped operand is larger or equal to the first.
+Greater or equal: similar to @code{lt} (`less than' operator), but returning 1 when the second popped operand is larger or equal to the first.
 
 @item eq
-Equality: similar to @code{lt} (`less than' operator), but returning 1 when
-the two popped operands are equal (to double precision floating point
+Equality: similar to @code{lt} (`less than' operator), but returning 1 when the two popped operands are equal (to double precision floating point
 accuracy).
 
 @item ne
-Non-Equality: similar to @code{lt} (`less than' operator), but returning 1
-when the two popped operands are @emph{not} equal (to double precision
-floating point accuracy).
+Non-Equality: similar to @code{lt} (`less than' operator), but returning 1 when the two popped operands are @emph{not} equal (to double precision floating point accuracy).
 
 @item and
-Logical AND: returns 1 if both operands have a non-zero value and 0 if both
-are zero. Both operands have to be the same kind: either both images or
-both numbers.
+Logical AND: returns 1 if both operands have a non-zero value, and 0 otherwise.
+Both operands have to be the same kind: either both images or both numbers.
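+
+For example, combining the conditional operators above, the hypothetical command below (reading @file{image.fits} twice, with @option{-g1} described below and assuming the data are in HDU 1) will return 1 for pixels that are between 50 and 100:
+
+@example
+$ astarithmetic image.fits 50 gt image.fits 100 lt and -g1
+@end example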
 
 @item or
-Logical OR: returns 1 if either one of the operands is non-zero and 0 only
-when both operators are zero. Both operands have to be the same kind:
-either both images or both numbers.
+Logical OR: returns 1 if either one of the operands is non-zero and 0 only when both operands are zero.
+Both operands have to be the same kind: either both images or both numbers.
 
 @item not
-Logical NOT: returns 1 when the operand is zero and 0 when the operand is
-non-zero. The operand can be an image or number, for an image, it is
-applied to each pixel separately.
+Logical NOT: returns 1 when the operand is zero and 0 when the operand is non-zero.
+The operand can be an image or a number; for an image, it is applied to each pixel separately.
 
 @cindex Blank pixel
 @item isblank
-Test for a blank value (see @ref{Blank pixels}). In essence, this is very
-similar to the conditional operators: the output is either 1 or 0 (see the
-`less than' operator above). The difference is that it only needs one
-operand. Because of the definition of a blank pixel, a blank value is not
-even equal to itself, so you cannot use the equal operator above to select
-blank pixels. See the ``Blank pixels'' box below for more on Blank pixels
-in Arithmetic.
+Test for a blank value (see @ref{Blank pixels}).
+In essence, this is very similar to the conditional operators: the output is either 1 or 0 (see the `less than' operator above).
+The difference is that it only needs one operand.
+Because of the definition of a blank pixel, a blank value is not even equal to itself, so you cannot use the equal operator above to select blank pixels.
+See the ``Blank pixels'' box below for more on blank pixels in Arithmetic.
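+
+For example, the hypothetical command below will produce an image that is 1 on the blank pixels of @file{image.fits} and 0 elsewhere:
+
+@example
+$ astarithmetic image.fits isblank
+@end example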
 
 @item where
-Change the input (pixel) value @emph{where}/if a certain condition
-holds. The conditional operators above can be used to define the
-condition. Three operands are required for @command{where}. The input
-format is demonstrated in this simplified example:
+Change the input (pixel) value @emph{where}/if a certain condition holds.
+The conditional operators above can be used to define the condition.
+Three operands are required for @command{where}.
+The input format is demonstrated in this simplified example:
 
 @example
 $ astarithmetic modify.fits binary.fits if-true.fits where
 @end example
 
-The value of any pixel in @file{modify.fits} that corresponds to a non-zero
-@emph{and} non-blank pixel of @file{binary.fits} will be changed to the
-value of the same pixel in @file{if-true.fits} (this may also be a
-number). The 3rd and 2nd popped operands (@file{modify.fits} and
-@file{binary.fits} respectively, see @ref{Reverse polish notation}) have to
-have the same dimensions/size. @file{if-true.fits} can be either a number,
-or have the same dimension/size as the other two.
+The value of any pixel in @file{modify.fits} that corresponds to a non-zero @emph{and} non-blank pixel of @file{binary.fits} will be changed to the value of the same pixel in @file{if-true.fits} (this may also be a number).
+The 3rd and 2nd popped operands (@file{modify.fits} and @file{binary.fits} respectively, see @ref{Reverse polish notation}) have to have the same dimensions/size.
+@file{if-true.fits} can be either a number, or have the same dimension/size as the other two.
 
-The 2nd popped operand (@file{binary.fits}) has to have @code{uint8} (or
-@code{unsigned char} in standard C) type (see @ref{Numeric data types}). It
-is treated as a binary dataset (with only two values: zero and non-zero,
-hence the name @code{binary.fits} in this example).  However, commonly you
-won't be dealing with an actual FITS file of a condition/binary image. You
-will probably define the condition in the same run based on some other
-reference image and use the conditional and logical operators above to make
-a true/false (or one/zero) image for you internally. For example the case
-below:
+The 2nd popped operand (@file{binary.fits}) has to have @code{uint8} (or @code{unsigned char} in standard C) type (see @ref{Numeric data types}).
+It is treated as a binary dataset (with only two values: zero and non-zero, hence the name @code{binary.fits} in this example).
+However, commonly you won't be dealing with an actual FITS file of a condition/binary image.
+You will probably define the condition in the same run based on some other reference image and use the conditional and logical operators above to make a true/false (or one/zero) image for you internally.
+For example, the case below:
 
 @example
 $ astarithmetic in.fits reference.fits 100 gt new.fits where
 @end example
 
-In the example above, any of the @file{in.fits} pixels that has a value in
-@file{reference.fits} greater than @command{100}, will be replaced with the
-corresponding pixel in @file{new.fits}. Effectively the
-@code{reference.fits 100 gt} part created the condition/binary image which
-was added to the stack (in memory) and later used by @code{where}. The
-command above is thus equivalent to these two commands:
+In the example above, any of the @file{in.fits} pixels that have a value in @file{reference.fits} greater than @command{100} will be replaced with the corresponding pixel in @file{new.fits}.
+Effectively, the @code{reference.fits 100 gt} part created the condition/binary image which was added to the stack (in memory) and later used by @code{where}.
+The command above is thus equivalent to these two commands:
 
 @example
 $ astarithmetic reference.fits 100 gt --output=binary.fits
 $ astarithmetic in.fits binary.fits new.fits where
 @end example
 
-Finally, the input operands are read and used independently, so you can use
-the same file more than once as any of the operands.
+Finally, the input operands are read and used independently, so you can use the same file more than once as any of the operands.
 
-When the 1st popped operand to @code{where} (@file{if-true.fits}) is a
-single number, it may be a NaN value (or any blank value, depending on its
-type) like the example below (see @ref{Blank pixels}). When the number is
-blank, it will be converted to the blank value of the type of the 3rd
-popped operand (@code{in.fits}). Hence, in the example below, all the
-pixels in @file{reference.fits} that have a value greater than 100, will
-become blank in the natural data type of @file{in.fits} (even though NaN
-values are only defined for floating point types).
+When the 1st popped operand to @code{where} (@file{if-true.fits}) is a single number, it may be a NaN value (or any blank value, depending on its type) like the example below (see @ref{Blank pixels}).
+When the number is blank, it will be converted to the blank value of the type of the 3rd popped operand (@code{in.fits}).
+Hence, in the example below, all the pixels of @file{in.fits} whose corresponding pixel in @file{reference.fits} has a value greater than 100 will become blank, in the natural data type of @file{in.fits} (even though NaN values are only defined for floating point types).
 
 @example
 $ astarithmetic in.fits reference.fits 100 gt nan where
 @end example
 
 @item bitand
-Bitwise AND operator: only bits with values of 1 in both popped operands
-will get the value of 1, the rest will be set to 0. For example (assuming
-numbers can be written as bit strings on the command-line): @code{00101000
-00100010 bitand} will give @code{00100000}. Note that the bitwise operators
-only work on integer type datasets.
+Bitwise AND operator: only bits with values of 1 in both popped operands will get the value of 1; the rest will be set to 0.
+For example (assuming numbers can be written as bit strings on the command-line): @code{00101000 00100010 bitand} will give @code{00100000}.
+Note that the bitwise operators only work on integer type datasets.
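+
+As a concrete (hypothetical) sketch of the bit strings above in decimal notation (assuming command-line numbers are read as integer types): @mymath{40} is @code{00101000} and @mymath{34} is @code{00100010}, so the command below should print @mymath{32} (@code{00100000}):
+
+@example
+$ astarithmetic 40 34 bitand
+@end example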
 
 @item bitor
-Bitwise inclusive OR operator: The bits where at least one of the two popped
-operands has a 1 value get a value of 1, the others 0. For example
-(assuming numbers can be written as bit strings on the command-line):
-@code{00101000 00100010 bitand} will give @code{00101010}. Note that the
-bitwise operators only work on integer type datasets.
+Bitwise inclusive OR operator: the bits where at least one of the two popped operands has a 1 value get a value of 1, the others 0.
+For example (assuming numbers can be written as bit strings on the command-line): @code{00101000 00100010 bitor} will give @code{00101010}.
+Note that the bitwise operators only work on integer type datasets.
 
 @item bitxor
-Bitwise exclusive OR operator: A bit will be 1 if it differs between the
-two popped operands. For example (assuming numbers can be written as bit
-strings on the command-line): @code{00101000 00100010 bitand} will give
-@code{00001010}. Note that the bitwise operators only work on integer type
-datasets.
+Bitwise exclusive OR operator: a bit will be 1 if it differs between the two popped operands.
+For example (assuming numbers can be written as bit strings on the command-line): @code{00101000 00100010 bitxor} will give @code{00001010}.
+Note that the bitwise operators only work on integer type datasets.
 
 @item lshift
-Bitwise left shift operator: shift all the bits of the first operand to the
-left by a number of times given by the second operand. For example
-(assuming numbers can be written as bit strings on the command-line):
-@code{00101000 2 lshift} will give @code{10100000}. This is equivalent to
-multiplication by 4.  Note that the bitwise operators only work on integer
-type datasets.
+Bitwise left shift operator: shift all the bits of the first operand to the left by a number of times given by the second operand.
+For example (assuming numbers can be written as bit strings on the command-line): @code{00101000 2 lshift} will give @code{10100000}.
+This is equivalent to multiplication by 4.
+Note that the bitwise operators only work on integer type datasets.
 
 @item rshift
-Bitwise right shift operator: shift all the bits of the first operand to
-the right by a number of times given by the second operand. For example
-(assuming numbers can be written as bit strings on the command-line):
-@code{00101000 2 rshift} will give @code{00001010}. Note that the bitwise
-operators only work on integer type datasets.
+Bitwise right shift operator: shift all the bits of the first operand to the right by a number of times given by the second operand.
+For example (assuming numbers can be written as bit strings on the command-line): @code{00101000 2 rshift} will give @code{00001010}.
+Note that the bitwise operators only work on integer type datasets.
 
 @item bitnot
-Bitwise not (more formally known as one's complement) operator: flip all
-the bits of the popped operand (note that this is the only unary, or single
-operand, bitwise operator). In other words, any bit with a value of
-@code{0} is changed to @code{1} and vice-versa. For example (assuming
-numbers can be written as bit strings on the command-line): @code{00101000
-bitnot} will give @code{11010111}. Note that the bitwise operators only
-work on integer type datasets/numbers.
+Bitwise not (more formally known as one's complement) operator: flip all the bits of the popped operand (note that this is the only unary, or single operand, bitwise operator).
+In other words, any bit with a value of @code{0} is changed to @code{1} and vice-versa.
+For example (assuming numbers can be written as bit strings on the command-line): @code{00101000 bitnot} will give @code{11010111}.
+Note that the bitwise operators only work on integer type datasets/numbers.
 
 @item uint8
-Convert the type of the popped operand to 8-bit unsigned integer type (see
-@ref{Numeric data types}). The internal conversion of C will be used.
+Convert the type of the popped operand to 8-bit unsigned integer type (see @ref{Numeric data types}).
+The internal conversion of C will be used.
 
 @item int8
-Convert the type of the popped operand to 8-bit signed integer type (see
-@ref{Numeric data types}). The internal conversion of C will be used.
+Convert the type of the popped operand to 8-bit signed integer type (see @ref{Numeric data types}).
+The internal conversion of C will be used.
 
 @item uint16
-Convert the type of the popped operand to 16-bit unsigned integer type
-(see @ref{Numeric data types}). The internal conversion of C will be used.
+Convert the type of the popped operand to 16-bit unsigned integer type (see @ref{Numeric data types}).
+The internal conversion of C will be used.
 
 @item int16
-Convert the type of the popped operand to 16-bit signed integer (see
-@ref{Numeric data types}). The internal conversion of C will be used.
+Convert the type of the popped operand to 16-bit signed integer (see @ref{Numeric data types}).
+The internal conversion of C will be used.
 
 @item uint32
-Convert the type of the popped operand to 32-bit unsigned integer type
-(see @ref{Numeric data types}). The internal conversion of C will be used.
+Convert the type of the popped operand to 32-bit unsigned integer type (see @ref{Numeric data types}).
+The internal conversion of C will be used.
 
 @item int32
-Convert the type of the popped operand to 32-bit signed integer type (see
-@ref{Numeric data types}). The internal conversion of C will be used.
+Convert the type of the popped operand to 32-bit signed integer type (see @ref{Numeric data types}).
+The internal conversion of C will be used.
 
 @item uint64
-Convert the type of the popped operand to 64-bit unsigned integer (see
-@ref{Numeric data types}). The internal conversion of C will be used.
+Convert the type of the popped operand to 64-bit unsigned integer (see @ref{Numeric data types}).
+The internal conversion of C will be used.
 
 @item float32
-Convert the type of the popped operand to 32-bit (single precision)
-floating point (see @ref{Numeric data types}). The internal conversion of C
-will be used.
+Convert the type of the popped operand to 32-bit (single precision) floating point (see @ref{Numeric data types}).
+The internal conversion of C will be used.
 
 @item float64
-Convert the type of the popped operand to 64-bit (double precision)
-floating point (see @ref{Numeric data types}). The internal conversion of C
-will be used.
+Convert the type of the popped operand to 64-bit (double precision) floating point (see @ref{Numeric data types}).
+The internal conversion of C will be used.
 
 @item set-AAA
-Set the characters after the dash (@code{AAA} in the case shown here) as a
-name for the first popped operand on the stack. The named dataset will be
-freed from memory as soon as it is no longer needed, or if the name is
-reset to refer to another dataset later in the command. This operator thus
-enables re-usability of a dataset without having to re-read it from a file
-every time it is necessary during a process. When a dataset is necessary
-more than once, this operator can thus help simplify reading/writing on the
-command-line (thus avoiding potential bugs), while also speeding up the
-processing.
+Set the characters after the dash (@code{AAA} in the case shown here) as a name for the first popped operand on the stack.
+The named dataset will be freed from memory as soon as it is no longer needed, or if the name is reset to refer to another dataset later in the command.
+This operator thus enables re-usability of a dataset without having to re-read it from a file every time it is necessary during a process.
+When a dataset is necessary more than once, this operator can help simplify reading/writing on the command-line (thus avoiding potential bugs), while also speeding up the processing.
 
-Like all operators, this operator pops the top operand off of the main
-processing stack, but unlike other operands, it won't add anything back to
-the stack immediately. It will keep the popped dataset in memory through a
-separate list of named datasets (not on the main stack). That list will be
-used to add/copy any requested dataset to the main processing stack when
-the name is called.
+Like all operators, this operator pops the top operand off of the main processing stack, but unlike other operators, it won't add anything back to the stack immediately.
+It will keep the popped dataset in memory through a separate list of named datasets (not on the main stack).
+That list will be used to add/copy any requested dataset to the main processing stack when the name is called.
 
-The name to give the popped dataset is part of the operator's name. For
-example the @code{set-a} operator of the command below, gives the name
-``@code{a}'' to the contents of @file{image.fits}. This name is then used
-instead of the actual filename to multiply the dataset by two.
+The name to give the popped dataset is part of the operator's name.
+For example, the @code{set-a} operator of the command below gives the name ``@code{a}'' to the contents of @file{image.fits}.
+This name is then used instead of the actual filename to multiply the dataset by two.
 
 @example
 $ astarithmetic image.fits set-a a 2 x
 @end example
 
-The name can be any string, but avoid strings ending with standard filename
-suffixes (for example @file{.fits})@footnote{A dataset name like
-@file{a.fits} (which can be set with @command{set-a.fits}) will cause
-confusion in the the initial parser of Arithmetic. It will assume this name
-is a FITS file, and if it is used multiple times, Arithmetic will abort,
-complaining that you haven't provided enough HDUs.}.
+The name can be any string, but avoid strings ending with standard filename suffixes (for example @file{.fits})@footnote{A dataset name like @file{a.fits} (which can be set with @command{set-a.fits}) will cause confusion in the initial parser of Arithmetic.
+It will assume this name is a FITS file, and if it is used multiple times, Arithmetic will abort, complaining that you haven't provided enough HDUs.}.
 
-One example of the usefulness of this operator is in the @code{where}
-operator. For example, let's assume you want to mask all pixels larger than
-@code{5} in @file{image.fits} (extension number 1) with a NaN
-value. Without setting a name for the dataset, you have to read the file
-two times from memory in a command like this:
+One example of the usefulness of this operator is in the @code{where} operator.
+For example, let's assume you want to mask all pixels larger than @code{5} in @file{image.fits} (extension number 1) with a NaN value.
+Without setting a name for the dataset, you have to read the file into memory two times, in a command like this:
 
 @example
 $ astarithmetic image.fits image.fits 5 gt nan where -g1
 @end example
 
-But with this operator you can simply give @file{image.fits} the name
-@code{i} and simplify the command above to the more readable one below
-(which greatly helps when the filename is long):
+But with this operator you can simply give @file{image.fits} the name @code{i} and simplify the command above to the more readable one below (which greatly helps when the filename is long):
 
 @example
 $ astarithmetic image.fits set-i   i i 5 gt nan where
 @end example
 
 @item tofile-AAA
-Write the top operand on the operands stack into a file called @code{AAA}
-(can be any FITS file name) without changing the operands stack. If you
-don't need the dataset any more and would like to free it, see the
-@code{tofilefree} operator below.
-
-By default, any file that is given to this operator is deleted before
-Arithmetic actually starts working on the input datasets. The deletion can
-be deactivated with the @option{--dontdelete} option (as in all Gnuastro
-programs, see @ref{Input output options}). If the same FITS file is given
-to this operator multiple times, it will contain multiple extensions (in
-the same order that it was called.
-
-For example the operator @command{tofile-check.fits} will write the top
-operand to @file{check.fits}. Since it doesn't modify the operands stack,
-this operator is very convenient when you want to debug, or understanding,
-a string of operators and operands given to Arithmetic: simply put
-@command{tofile-AAA} anywhere in the process to see what is happening
-behind the scenes without modifying the overall process.
+Write the top operand on the operands stack into a file called @code{AAA} (can be any FITS file name) without changing the operands stack.
+If you don't need the dataset any more and would like to free it, see the @code{tofilefree} operator below.
+
+By default, any file that is given to this operator is deleted before Arithmetic actually starts working on the input datasets.
+The deletion can be deactivated with the @option{--dontdelete} option (as in all Gnuastro programs, see @ref{Input output options}).
+If the same FITS file is given to this operator multiple times, it will contain multiple extensions (in the same order that it was called).
+
+For example, the operator @command{tofile-check.fits} will write the top operand to @file{check.fits}.
+Since it doesn't modify the operands stack, this operator is very convenient when you want to debug, or understand, a string of operators and operands given to Arithmetic: simply put @command{tofile-AAA} anywhere in the process to see what is happening behind the scenes without modifying the overall process.
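+
+For example, inserting @command{tofile-check.fits} into the @code{connected-components} example above, the hypothetical command below saves the intermediate binary image into @file{check.fits} while the labeling continues unchanged:
+
+@example
+$ astarithmetic in.fits 100 gt tofile-check.fits 2 connected-components
+@end example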
 
 @item tofilefree-AAA
-Similar to the @code{tofile} operator, with the only difference that the
-dataset that is written to a file is popped from the operand stack and
-freed from memory (cannot be used any more).
+Similar to the @code{tofile} operator, with the only difference that the dataset that is written to a file is popped from the operand stack and freed from memory (cannot be used any more).
 
 @end table
 
 @cartouche
 @noindent
-@strong{Blank pixels in Arithmetic:} Blank pixels in the image (see
-@ref{Blank pixels}) will be stored based on the data type. When the input
-is floating point type, blank values are NaN. One aspect of NaN values is
-that by definition they will fail on @emph{any} comparison. Hence both
-equal and not-equal operators will fail when both their operands are NaN!
-Therefore, the only way to guarantee selection of blank pixels is through
-the @command{isblank} operator explained above.
+@strong{Blank pixels in Arithmetic:} Blank pixels in the image (see @ref{Blank pixels}) will be stored based on the data type.
+When the input is floating point type, blank values are NaN.
+One aspect of NaN values is that by definition they will fail on @emph{any} comparison.
+Hence both equal and not-equal operators will fail when both their operands are NaN!
+Therefore, the only way to guarantee selection of blank pixels is through the @command{isblank} operator explained above.
 
-One way you can exploit this property of the NaN value to your advantage is
-when you want a fully zero-valued image (even over the blank pixels) based
-on an already existing image (with same size and world coordinate system
-settings). The following command will produce this for you:
+One way you can exploit this property of the NaN value to your advantage is when you want a fully zero-valued image (even over the blank pixels) based on an already existing image (with the same size and world coordinate system settings).
+The following command will produce this for you:
 
 @example
 $ astarithmetic input.fits nan eq --output=all-zeros.fits
 @end example
 
 @noindent
-Note that on the command-line you can write NaN in any case (for example
-@command{NaN}, or @command{NAN} are also acceptable). Reading NaN as a
-floating point number in Gnuastro isn't case-sensitive.
+Note that on the command-line you can write NaN in any case (for example, @command{NaN} or @command{NAN} are also acceptable).
+Reading NaN as a floating point number in Gnuastro isn't case-sensitive.
 @end cartouche
 
 
 @node Invoking astarithmetic,  , Arithmetic operators, Arithmetic
 @subsection Invoking Arithmetic
 
-Arithmetic will do pixel to pixel arithmetic operations on the individual
-pixels of input data and/or numbers. For the full list of operators with
-explanations, please see @ref{Arithmetic operators}. Any operand that only
-has a single element (number, or single pixel FITS image) will be read as a
-number, the rest of the inputs must have the same dimensions. The general
-template is:
+Arithmetic will do pixel to pixel arithmetic operations on the individual pixels of input data and/or numbers.
+For the full list of operators with explanations, please see @ref{Arithmetic operators}.
+Any operand that only has a single element (number, or single pixel FITS image) will be read as a number; the rest of the inputs must have the same dimensions.
+The general template is:
 
 @example
 $ astarithmetic [OPTION...] ASTRdata1 [ASTRdata2] OPERATOR ...
@@ -13678,91 +10185,60 @@ $ astarithmetic img1.fits img2.fits img3.fits median \
                 -h0 -h1 -h2 --out=median.fits
 @end example
 
-Arithmetic's notation for giving operands to operators is fully described
-in @ref{Reverse polish notation}. The output dataset is last remaining
-operand on the stack. When the output dataset a single number, it will be
-printed on the command-line. When the output is an array, it will be stored
-as a file.
-
-The name of the final file can be specified with the @option{--output}
-option, but if its not given, Arithmetic will use ``automatic output'' on
-the name of the first FITS image encountered to generate an output file
-name, see @ref{Automatic output}. By default, if the output file already
-exists, it will be deleted before Arithmetic starts operation. However,
-this can be disabled with the @option{--dontdelete} option (see below). At
-any point during Arithmetic's operation, you can also write the top operand
-on the stack to a file, using the @code{tofile} or @code{tofilefree}
-operators, see @ref{Arithmetic operators}.
-
-By default, the world coordinate system (WCS) information of the output
-dataset will be taken from the first input image (that contains a WCS) on
-the command-line. This can be modified with the @option{--wcsfile} and
-@option{--wcshdu} options described below. When the @option{--quiet} option
-isn't given, the name and extension of the dataset used for the output's
-WCS is printed on the command-line.
-
-Through operators like those starting with @code{collapse-}, the
-dimensionality of the inputs may not be the same as the outputs. By
-default, when the output is 1D, Arithmetic will write it as a table, not an
-image/array. The format of the output table (plain text or FITS ASCII or
-binary) can be set with the @option{--tableformat} option, see @ref{Input
-output options}). You can disable this feature (write 1D arrays as FITS
-images/arrays) with the @option{--onedasimage} option.
-
-See @ref{Common options} for a review of the options in all Gnuastro
-programs. Arithmetic just redefines the @option{--hdu} and
-@option{--dontdelete} options as explained below.
+Arithmetic's notation for giving operands to operators is fully described in @ref{Reverse polish notation}.
+The output dataset is the last remaining operand on the stack.
+When the output dataset is a single number, it will be printed on the command-line.
+When the output is an array, it will be stored as a file.
+
+The name of the final file can be specified with the @option{--output} option, but if it is not given, Arithmetic will use ``automatic output'' on the name of the first FITS image encountered to generate an output file name, see @ref{Automatic output}.
+By default, if the output file already exists, it will be deleted before Arithmetic starts operation.
+However, this can be disabled with the @option{--dontdelete} option (see below).
+At any point during Arithmetic's operation, you can also write the top operand on the stack to a file, using the @code{tofile} or @code{tofilefree} operators, see @ref{Arithmetic operators}.
+
+By default, the world coordinate system (WCS) information of the output dataset will be taken from the first input image (that contains a WCS) on the command-line.
+This can be modified with the @option{--wcsfile} and @option{--wcshdu} options described below.
+When the @option{--quiet} option isn't given, the name and extension of the dataset used for the output's WCS is printed on the command-line.
+
+Through operators like those starting with @code{collapse-}, the dimensionality of the inputs may not be the same as the outputs.
+By default, when the output is 1D, Arithmetic will write it as a table, not an image/array.
+The format of the output table (plain text or FITS ASCII or binary) can be set with the @option{--tableformat} option (see @ref{Input output options}).
+You can disable this feature (write 1D arrays as FITS images/arrays, or to the standard output) with the @option{--onedasimage} or @option{--onedonstdout} options.
+
+See @ref{Common options} for a review of the options in all Gnuastro programs.
+Arithmetic just redefines the @option{--hdu} and @option{--dontdelete} options as explained below.
 
 @table @option
 
 @item -h INT/STR
 @itemx --hdu INT/STR
-The header data unit of the input FITS images, see @ref{Input output
-options}. Unlike most options in Gnuastro (which will ultimately only have
-one value for this option), Arithmetic allows @option{--hdu} to be called
-multiple times and the value of each invocation will be stored separately
-(for the unlimited number of input images you would like to use). Recall
-that for other programs this (common) option only takes a single value. So
-in other programs, if you specify it multiple times on the command-line,
-only the last value will be used and in the configuration files, it will be
-ignored if it already has a value.
-
-The order of the values to @option{--hdu} has to be in the same order as
-input FITS images. Options are first read from the command-line (from left
-to right), then top-down in each configuration file, see @ref{Configuration
-file precedence}.
-
-If the number of HDUs is less than the number of input images,
-Arithmetic will abort and notify you. However, if there are more HDUs
-than FITS images, there is no problem: they will be used in the given
-order (every time a FITS image comes up on the stack) and the extra
-HDUs will be ignored in the end. So there is no problem with having
-extra HDUs in the configuration files and by default several HDUs with
-a value of @option{0} are kept in the system-wide configuration file
-when you install Gnuastro.
+The header data unit of the input FITS images, see @ref{Input output options}.
+Unlike most options in Gnuastro (which will ultimately only have one value for 
this option), Arithmetic allows @option{--hdu} to be called multiple times and 
the value of each invocation will be stored separately (for the unlimited 
number of input images you would like to use).
+Recall that for other programs this (common) option only takes a single value.
+So in other programs, if you specify it multiple times on the command-line, 
only the last value will be used and in the configuration files, it will be 
ignored if it already has a value.
+
+The order of the values to @option{--hdu} has to be in the same order as input 
FITS images.
+Options are first read from the command-line (from left to right), then 
top-down in each configuration file, see @ref{Configuration file precedence}.
+
+If the number of HDUs is less than the number of input images, Arithmetic will 
abort and notify you.
+However, if there are more HDUs than FITS images, there is no problem: they 
will be used in the given order (every time a FITS image comes up on the stack) 
and the extra HDUs will be ignored in the end.
+So there is no problem with having extra HDUs in the configuration files, and by default several HDUs with a value of @option{0} are kept in the system-wide configuration file when you install Gnuastro.
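+
+For example, in the command below (with hypothetical input files), the first HDU value is used for @file{a.fits} and the second for @file{b.fits}:
+
+@example
+$ astarithmetic a.fits b.fits + --hdu=1 --hdu=2 --output=sum.fits
+@end example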
 
 @item -g INT/STR
 @itemx --globalhdu INT/STR
-Use the value to this option as the HDU of all input FITS files. This
-option is very convenient when you have many input files and the dataset of
-interest is in the same HDU of all the files. When this option is called,
-any values given to the @option{--hdu} option (explained above) are ignored
-and will not be used.
+Use the value given to this option as the HDU of all input FITS files.
+This option is very convenient when you have many input files and the dataset 
of interest is in the same HDU of all the files.
+When this option is called, any values given to the @option{--hdu} option 
(explained above) are ignored and will not be used.
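+
+For example, when the dataset of interest is in extension 1 of all the (hypothetical) inputs, a single @option{--globalhdu} replaces repeated @option{--hdu} calls:
+
+@example
+$ astarithmetic a.fits b.fits + c.fits + -g1 --output=sum.fits
+@end example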
 
 @item -w STR
 @itemx --wcsfile STR
-FITS Filename containing the WCS structure that must be written to the
-output. The HDU/extension should be specified with @option{--wcshdu}.
+FITS filename containing the WCS structure that must be written to the output.
+The HDU/extension should be specified with @option{--wcshdu}.
 
-When this option is used, the respective WCS will be read before any
-processing is done on the command-line and directly used in the final
-output. If the given file doesn't have any WCS, then the default WCS (first
-file on the command-line with WCS) will be used in the output.
+When this option is used, the respective WCS will be read before any 
processing is done on the command-line and directly used in the final output.
+If the given file doesn't have any WCS, then the default WCS (first file on 
the command-line with WCS) will be used in the output.
 
-This option will mostly be used when the default file (first of the set of
-inputs) is not the one containing your desired WCS. But with this option,
-you can also use Arithmetic to rewrite/change the WCS of an existing FITS
-dataset from another file:
+This option will mostly be used when the default file (first of the set of 
inputs) is not the one containing your desired WCS.
+But with this option, you can also use Arithmetic to rewrite/change the WCS of 
an existing FITS dataset from another file:
 
 @example
 $ astarithmetic data.fits --wcsfile=other.fits -ofinal.fits
@@ -13770,53 +10246,41 @@ $ astarithmetic data.fits --wcsfile=other.fits 
-ofinal.fits
 
 @item -W STR
 @itemx --wcshdu STR
-HDU/extension to read the WCS within the file given to
-@option{--wcsfile}. For more, see the description of @option{--wcsfile}.
+HDU/extension to read the WCS within the file given to @option{--wcsfile}.
+For more, see the description of @option{--wcsfile}.
 
 @item -O
 @itemx --onedasimage
-When final dataset to write as output only has one dimension, write it as a
-FITS image/array. By default, if the output is 1D, it will be written as a
-table, see above.
+When the final dataset to be written as output has only one dimension, write it as a FITS image/array.
+By default, if the output is 1D, it will be written as a table, see above.
+
+@item -s
+@itemx --onedonstdout
+When the final dataset to be written as output has only one dimension, print it on the standard output, not into a file.
+By default, if the output is 1D, it will be written as a table, see above.
 
 @item -D
 @itemx --dontdelete
-Don't delete the output file, or files given to the @code{tofile} or
-@code{tofilefree} operators, if they already exist. Instead append the
-desired datasets to the extensions that already exist in the respective
-file. Note it doesn't matter if the final output file name is given with
-the the @option{--output} option, or determined automatically.
-
-Arithmetic treats this option differently from its default operation in
-other Gnuastro programs (see @ref{Input output options}). If the output
-file exists, when other Gnuastro programs are called with
-@option{--dontdelete}, they simply complain and abort. But when Arithmetic
-is called with @option{--dontdelete}, it will appended the dataset(s) to
-the existing extension(s) in the file.
+Don't delete the output file, or files given to the @code{tofile} or 
@code{tofilefree} operators, if they already exist.
+Instead append the desired datasets to the extensions that already exist in 
the respective file.
+Note that it doesn't matter whether the final output file name is given with the @option{--output} option, or determined automatically.
+
+Arithmetic treats this option differently from its default operation in other 
Gnuastro programs (see @ref{Input output options}).
+If the output file exists, when other Gnuastro programs are called with 
@option{--dontdelete}, they simply complain and abort.
+But when Arithmetic is called with @option{--dontdelete}, it will append the dataset(s) to the existing extension(s) in the file.
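+
+For example, if @file{out.fits} already exists (file names here are hypothetical), the following command will append its result as a new extension instead of deleting the file first:
+
+@example
+$ astarithmetic a.fits 2 + --output=out.fits --dontdelete
+@end example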
 @end table
 
-Arithmetic accepts two kinds of input: images and numbers. Images are
-considered to be any of the inputs that is a file name of a recognized type
-(see @ref{Arguments}) and has more than one element/pixel. Numbers on the
-command-line will be read into the smallest type (see @ref{Numeric data
-types}) that can store them, so @command{-2} will be read as a @code{char}
-type (which is signed on most systems and can thus keep negative values),
-@command{2500} will be read as an @code{unsigned short} (all positive
-numbers will be read as unsigned), while @code{3.1415926535897} will be
-read as a @code{double} and @code{3.14} will be read as a @code{float}. To
-force a number to be read as float, add a @code{f} after it, so
-@command{5f} will be added to the stack as @code{float} (see @ref{Reverse
-polish notation}).
+Arithmetic accepts two kinds of input: images and numbers.
+Any input that is a file name of a recognized type (see @ref{Arguments}) and has more than one element/pixel is considered to be an image.
+Numbers on the command-line will be read into the smallest type (see 
@ref{Numeric data types}) that can store them, so @command{-2} will be read as 
a @code{char} type (which is signed on most systems and can thus keep negative 
values), @command{2500} will be read as an @code{unsigned short} (all positive 
numbers will be read as unsigned), while @code{3.1415926535897} will be read as 
a @code{double} and @code{3.14} will be read as a @code{float}.
+To force a number to be read as float, add an @code{f} after it, so @command{5f} will be added to the stack as @code{float} (see @ref{Reverse polish notation}).
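+
+For example, in the command below (with a hypothetical @file{image.fits}), @command{5} alone would be read as an integer, but the @command{f} suffix forces it onto the stack as a floating point number:
+
+@example
+$ astarithmetic image.fits 5f +
+@end example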
 
-Unless otherwise stated (in @ref{Arithmetic operators}), the operators can
-deal with numeric multiple data types (see @ref{Numeric data types}). For
-example in ``@command{a.fits b.fits +}'', the image types can be
-@code{long} and @code{float}. In such cases, C's internal type conversion
-will be used. The output type will be set to the higher-ranking type of the
-two inputs. Unsigned integer types have smaller ranking than their signed
-counterparts and floating point types have higher ranking than the integer
-types. So the internal C type conversions done in the example above are
-equivalent to this piece of C:
+Unless otherwise stated (in @ref{Arithmetic operators}), the operators can deal with multiple numeric data types (see @ref{Numeric data types}).
+For example in ``@command{a.fits b.fits +}'', the image types can be 
@code{long} and @code{float}.
+In such cases, C's internal type conversion will be used.
+The output type will be set to the higher-ranking type of the two inputs.
+Unsigned integer types have a lower ranking than their signed counterparts, and floating point types have a higher ranking than the integer types.
+So the internal C type conversions done in the example above are equivalent to 
this piece of C:
 
 @example
 size_t i;
@@ -13826,43 +10290,27 @@ for(i=0;i<100;++i) out[i]=a[i]+b[i];
 @end example
 
 @noindent
-Relying on the default C type conversion significantly speeds up the
-processing and also requires less RAM (when using very large images).
+Relying on the default C type conversion significantly speeds up the 
processing and also requires less RAM (when using very large images).
 
-Some operators can only work on integer types (of any length, for example
-bitwise operators) while others only work on floating point types,
-(currently only the @code{pow} operator). In such cases, if the operand
-type(s) are different, an error will be printed. Arithmetic also comes with
-internal type conversion operators which you can use to convert the data
-into the appropriate type, see @ref{Arithmetic operators}.
+Some operators can only work on integer types (of any length, for example bitwise operators) while others only work on floating point types (currently only the @code{pow} operator).
+In such cases, if the operand type(s) are different, an error will be printed.
+Arithmetic also comes with internal type conversion operators which you can 
use to convert the data into the appropriate type, see @ref{Arithmetic 
operators}.
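+
+For example, since @code{pow} only accepts floating point operands, an integer image can first be converted with the @code{float32} conversion operator (a sketch, with a hypothetical integer-typed @file{int.fits}):
+
+@example
+$ astarithmetic int.fits float32 3f pow --output=out.fits
+@end example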
 
 @cindex Options
-The hyphen (@command{-}) can be used both to specify options (see
-@ref{Options}) and also to specify a negative number which might be
-necessary in your arithmetic. In order to enable you to do this, Arithmetic
-will first parse all the input strings and if the first character after a
-hyphen is a digit, then that hyphen is temporarily replaced by the vertical
-tab character which is not commonly used. The arguments are then parsed and
-these strings will not be specified as an option. Then the given arguments
-are parsed and any vertical tabs are replaced back with a hyphen so they
-can be read as negative numbers. Therefore, as long as the names of the
-files you want to work on, don't start with a vertical tab followed by a
-digit, there is no problem. An important consequence of this implementation
-is that you should not write negative fractions like this: @command{-.3},
-instead write them as @command{-0.3}.
+The hyphen (@command{-}) can be used both to specify options (see @ref{Options}) and to specify a negative number which might be necessary in your arithmetic.
+In order to enable you to do this, Arithmetic will first parse all the input strings, and if the first character after a hyphen is a digit, that hyphen is temporarily replaced by the vertical tab character (which is not commonly used).
+The arguments are then parsed, so these strings are not mistaken for options.
+Afterwards, the vertical tabs are replaced back with a hyphen so the strings can be read as negative numbers.
+Therefore, as long as the names of the files you want to work on don't start with a vertical tab followed by a digit, there is no problem.
+An important consequence of this implementation is that you should not write negative fractions like @command{-.3}; instead write them as @command{-0.3}.
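+
+For example, both negative numbers below are correctly read as operands (not options) because the character after each hyphen is a digit; note the @command{-0.3} instead of @command{-.3}:
+
+@example
+$ astarithmetic image.fits -10 + -0.3 x --output=out.fits
+@end example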
 
 @cindex AWK
 @cindex GNU AWK
-Without any images, Arithmetic will act like a simple calculator and print
-the resulting output number on the standard output like the first example
-above. If you really want such calculator operations on the command-line,
-AWK (GNU AWK is the most common implementation) is much faster, easier and
-much more powerful. For example, the numerical one-line example above can
-be done with the following command. In general AWK is a fantastic tool and
-GNU AWK has a wonderful manual
-(@url{https://www.gnu.org/software/gawk/manual/}). So if you often confront
-situations like this, or have to work with large text tables/catalogs, be
-sure to checkout AWK and simplify your life.
+Without any images, Arithmetic will act like a simple calculator and print the 
resulting output number on the standard output like the first example above.
+If you really want such calculator operations on the command-line, AWK (GNU 
AWK is the most common implementation) is much faster, easier and much more 
powerful.
+For example, the numerical one-line example above can be done with the 
following command.
+In general AWK is a fantastic tool and GNU AWK has a wonderful manual 
(@url{https://www.gnu.org/software/gawk/manual/}).
+So if you often confront situations like this, or have to work with large text tables/catalogs, be sure to check out AWK and simplify your life.
 
 @example
 $ echo "" | awk '@{print (10.32-3.84)^2.7@}'
@@ -13891,25 +10339,15 @@ $ echo "" | awk '@{print (10.32-3.84)^2.7@}'
 @cindex Weighted average
 @cindex Average, weighted
 @cindex Kernel, convolution
-On an image, convolution can be thought of as a process to blur or remove
-the contrast in an image. If you are already familiar with the concept and
-just want to run Convolve, you can jump to @ref{Convolution kernel} and
-@ref{Invoking astconvolve} and skip the lengthy introduction on the basic
-definitions and concepts of convolution.
-
-There are generally two methods to convolve an image. The first and
-more intuitive one is in the ``spatial domain'' or using the actual
-image pixel values, see @ref{Spatial domain convolution}. The second
-method is when we manipulate the ``frequency domain'', or work on the
-magnitudes of the different frequencies that constitute the image, see
-@ref{Frequency domain and Fourier operations}. Understanding
-convolution in the spatial domain is more intuitive and thus
-recommended if you are just starting to learn about
-convolution. However, getting a good grasp of the frequency domain is
-a little more involved and needs some concentration and some
-mathematical proofs. However, its reward is a faster operation and
-more importantly a very fundamental understanding of this very
-important operation.
+On an image, convolution can be thought of as a process to blur it or remove its contrast.
+If you are already familiar with the concept and just want to run Convolve, 
you can jump to @ref{Convolution kernel} and @ref{Invoking astconvolve} and 
skip the lengthy introduction on the basic definitions and concepts of 
convolution.
+
+There are generally two methods to convolve an image.
+The first and more intuitive one is in the ``spatial domain'' or using the 
actual image pixel values, see @ref{Spatial domain convolution}.
+The second method is when we manipulate the ``frequency domain'', or work on 
the magnitudes of the different frequencies that constitute the image, see 
@ref{Frequency domain and Fourier operations}.
+Understanding convolution in the spatial domain is more intuitive and thus 
recommended if you are just starting to learn about convolution.
+However, getting a good grasp of the frequency domain is a little more involved and needs some concentration and some mathematical proofs.
+But its reward is a faster operation and, more importantly, a very fundamental understanding of this very important operation.
 
 @cindex Detection
 @cindex Atmosphere
@@ -13917,27 +10355,16 @@ important operation.
 @cindex Cosmic rays
 @cindex Pixel mixing
 @cindex Mixing pixel values
-Convolution of an image will generally result in blurring the image because
-it mixes pixel values. In other words, if the image has sharp differences
-in neighboring pixel values@footnote{In astronomy, the only major time we
-confront such sharp borders in signal are cosmic rays. All other sources of
-signal in an image are already blurred by the atmosphere or the optics of
-the instrument.}, those sharp differences will become smoother. This has
-very good consequences in detection of signal in noise for example. In an
-actual observed image, the variation in neighboring pixel values due to
-noise can be very high. But after convolution, those variations will
-decrease and we have a better hope in detecting the possible underlying
-signal. Another case where convolution is extensively used is in mock
-images and modeling in general, convolution can be used to simulate the
-effect of the atmosphere or the optical system on the mock profiles that we
-create, see @ref{PSF}. Convolution is a very interesting and important
-topic in any form of signal analysis (including astronomical
-observations). So we have thoroughly@footnote{A mathematician will
-certainly consider this explanation is incomplete and inaccurate. However
-this text is written for an understanding on the operations that are done
-on a real (not complex, discrete and noisy) astronomical image, not any
-general form of abstract function} explained the concepts behind it in the
-following sub-sections.
+Convolution of an image will generally result in blurring the image because it 
mixes pixel values.
+In other words, if the image has sharp differences in neighboring pixel 
values@footnote{In astronomy, the only major time we confront such sharp 
borders in signal are cosmic rays.
+All other sources of signal in an image are already blurred by the atmosphere 
or the optics of the instrument.}, those sharp differences will become smoother.
+This has very good consequences, for example, in the detection of signal in noise.
+In an actual observed image, the variation in neighboring pixel values due to 
noise can be very high.
+But after convolution, those variations will decrease and we have a better hope of detecting the possible underlying signal.
+Another case where convolution is extensively used is in mock images and modeling in general: convolution can be used to simulate the effect of the atmosphere or the optical system on the mock profiles that we create, see @ref{PSF}.
+Convolution is a very interesting and important topic in any form of signal 
analysis (including astronomical observations).
+So we have thoroughly@footnote{A mathematician will certainly consider this explanation incomplete and inaccurate.
+However, this text is written for an understanding of the operations that are done on a real (not complex, discrete and noisy) astronomical image, not any general form of abstract function.} explained the concepts behind it in the following sub-sections.
 
 @menu
 * Spatial domain convolution::  Only using the input image values.
@@ -13950,15 +10377,9 @@ following sub-sections.
 @node Spatial domain convolution, Frequency domain and Fourier operations, 
Convolve, Convolve
 @subsection Spatial domain convolution
 
-The pixels in an input image represent different ``spatial'' positions,
-therefore when convolution is done only using the actual input pixel
-values, we name the process as being done in the ``Spatial domain''. In
-particular this is in contrast to the ``frequency domain'' that we will
-discuss later in @ref{Frequency domain and Fourier operations}. In the
-spatial domain (and in realistic situations where the image and the
-convolution kernel don't extend to infinity), convolution is the process of
-changing the value of one pixel to the @emph{weighted} average of all the
-pixels in its @emph{neighborhood}.
+The pixels in an input image represent different ``spatial'' positions, 
therefore when convolution is done only using the actual input pixel values, we 
name the process as being done in the ``Spatial domain''.
+In particular this is in contrast to the ``frequency domain'' that we will 
discuss later in @ref{Frequency domain and Fourier operations}.
+In the spatial domain (and in realistic situations where the image and the 
convolution kernel don't extend to infinity), convolution is the process of 
changing the value of one pixel to the @emph{weighted} average of all the 
pixels in its @emph{neighborhood}.
 
 The `neighborhood' of each pixel (how many pixels in which direction) and
 the `weight' function (how much each neighboring pixel should contribute
@@ -13973,43 +10394,29 @@ as a ``kernel''@footnote{Also known as filter, here 
we will use `kernel'.}.
 @node Convolution process, Edges in the spatial domain, Spatial domain 
convolution, Spatial domain convolution
 @subsubsection Convolution process
 
-In convolution, the kernel specifies the weight and positions of the
-neighbors of each pixel. To find the convolved value of a pixel, the
-central pixel of the kernel is placed on that pixel. The values of each
-overlapping pixel in the kernel and image are multiplied by each other and
-summed for all the kernel pixels. To have one pixel in the center, the
-sides of the convolution kernel have to be an odd number. This process
-effectively mixes the pixel values of each pixel with its neighbors,
-resulting in a blurred image compared to the sharper input image.
+In convolution, the kernel specifies the weight and positions of the neighbors 
of each pixel.
+To find the convolved value of a pixel, the central pixel of the kernel is 
placed on that pixel.
+The values of each overlapping pixel in the kernel and image are multiplied by 
each other and summed for all the kernel pixels.
+To have one pixel in the center, the sides of the convolution kernel have to 
be an odd number.
+This process effectively mixes each pixel's value with those of its neighbors, resulting in a blurred image compared to the sharper input image.
 
 @cindex Linear spatial filtering
-Formally, convolution is one kind of linear `spatial filtering' in image
-processing texts. If we assume that the kernel has @mymath{2a+1} and
-@mymath{2b+1} pixels on each side, the convolved value of a pixel placed at
-@mymath{x} and @mymath{y} (@mymath{C_{x,y}}) can be calculated from the
-neighboring pixel values in the input image (@mymath{I}) and the kernel
-(@mymath{K}) from
+Formally, convolution is one kind of linear `spatial filtering' in image 
processing texts.
+If we assume that the kernel has @mymath{2a+1} and @mymath{2b+1} pixels on 
each side, the convolved value of a pixel placed at @mymath{x} and @mymath{y} 
(@mymath{C_{x,y}}) can be calculated from the neighboring pixel values in the 
input image (@mymath{I}) and the kernel (@mymath{K}) from
 
 @dispmath{C_{x,y}=\sum_{s=-a}^{a}\sum_{t=-b}^{b}K_{s,t}\times{}I_{x+s,y+t}.}
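+
+To make the process concrete, the equation above can be written as the short C function below (a minimal sketch with hypothetical variable names, not Gnuastro's actual implementation):
+
+@example
+/* Convolved value of the pixel at (x,y): `img' is the row-major
+   input of width `w' and height `h', `ker' is the row-major kernel
+   with 2a+1 columns and 2b+1 rows, and pixels outside the image
+   are taken to be zero. */
+float
+convolve_pixel(float *img, long w, long h, float *ker,
+               long a, long b, long x, long y)
+@{
+  long s, t, i, j;
+  float sum=0.0f;
+  for(s=-a; s<=a; ++s)
+    for(t=-b; t<=b; ++t)
+      @{
+        i=x+s;  j=y+t;                   /* Overlapping image pixel. */
+        if(i>=0 && i<w && j>=0 && j<h)   /* Ignore out-of-image.     */
+          sum += ker[ (t+b)*(2*a+1) + (s+a) ] * img[ j*w + i ];
+      @}
+  return sum;
+@}
+@end example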
 
 @cindex Correlation
 @cindex Convolution
-Any pixel coordinate that is outside of the image in the equation above
-will be considered to be zero. When the kernel is symmetric about its
-center the blurred image has the same orientation as the original
-image. However, if the kernel is not symmetric, the image will be affected
-in the opposite manner, this is a natural consequence of the definition of
-spatial filtering. In order to avoid this we can rotate the kernel about
-its center by 180 degrees so the convolved output can have the same
-original orientation. Technically speaking, only if the kernel is flipped
-the process is known @emph{Convolution}. If it isn't it is known as
-@emph{Correlation}.
+Any pixel coordinate that is outside of the image in the equation above will 
be considered to be zero.
+When the kernel is symmetric about its center the blurred image has the same 
orientation as the original image.
+However, if the kernel is not symmetric, the image will be affected in the opposite manner; this is a natural consequence of the definition of spatial filtering.
+In order to avoid this, we can rotate the kernel about its center by 180 degrees so the convolved output can have the same orientation as the original.
+Technically speaking, the process is only known as @emph{Convolution} if the kernel is flipped.
+If it isn't, it is known as @emph{Correlation}.
 
-To be a weighted average, the sum of the weights (the pixels in the kernel)
-have to be unity. This will have the consequence that the convolved image
-of an object and unconvolved object will have the same brightness (see
-@ref{Flux Brightness and magnitude}), which is natural, because convolution
-should not eat up the object photons, it only disperses them.
+To be a weighted average, the sum of the weights (the pixels in the kernel) has to be unity.
+This will have the consequence that the convolved and unconvolved images of an object will have the same brightness (see @ref{Flux Brightness and magnitude}), which is natural, because convolution should not eat up the object photons, it only disperses them.
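+
+For example, an arbitrary kernel can be normalized to a unit sum with a simple loop like this C sketch (variable names are hypothetical):
+
+@example
+size_t i;
+float sum=0.0f;
+for(i=0; i<n; ++i) sum += ker[i];   /* Total of all kernel weights. */
+for(i=0; i<n; ++i) ker[i] /= sum;   /* Weights now sum to unity.    */
+@end example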
 
 
 
@@ -14018,93 +10425,53 @@ should not eat up the object photons, it only 
disperses them.
 @node Edges in the spatial domain,  , Convolution process, Spatial domain 
convolution
 @subsubsection Edges in the spatial domain
 
-In purely `linear' spatial filtering (convolution), there are problems on
-the edges of the input image. Here we will explain the problem in the
-spatial domain. For a discussion of this problem from the frequency domain
-perspective, see @ref{Edges in the frequency domain}. The problem
-originates from the fact that on the edges, in practice@footnote{Because we
-assumed the overlapping pixels outside the input image have a value of
-zero.}, the sum of the weights we use on the actual image pixels is not
-unity. For example, as discussed above, a profile in the center of an image
-will have the same brightness before and after convolution. However, for
-partially imaged profile on the edge of the image, the brightness (sum of
-its pixel fluxes within the image, see @ref{Flux Brightness and magnitude})
-will not be equal, some of the flux is going to be `eaten' by the edges.
-
-If you ran @command{$ make check} on the source files of Gnuastro, you can
-see the this effect by comparing the @file{convolve_frequency.fits} with
-@file{convolve_spatial.fits} in the @file{./tests/} directory. In the
-spatial domain, by default, no assumption will be made about pixels outside
-of the image or any blank pixels in the image. The problem explained above
-will also occur on the sides of blank regions (see @ref{Blank pixels}). The
-solution to this edge effect problem is only possible in the spatial
-domain. For pixels near the edge, we have to abandon the assumption that
-the sum of the kernel pixels is unity during the convolution
-process@footnote{ofcourse the sum of the kernel pixels still have to be
-unity in general.}. So taking @mymath{W} as the sum of the kernel pixels
-that overlapped with non-blank and in-image pixels, the equation in
-@ref{Convolution process} will become:
+In purely `linear' spatial filtering (convolution), there are problems on the 
edges of the input image.
+Here we will explain the problem in the spatial domain.
+For a discussion of this problem from the frequency domain perspective, see 
@ref{Edges in the frequency domain}.
+The problem originates from the fact that on the edges, in 
practice@footnote{Because we assumed the overlapping pixels outside the input 
image have a value of zero.}, the sum of the weights we use on the actual image 
pixels is not unity.
+For example, as discussed above, a profile in the center of an image will have 
the same brightness before and after convolution.
+However, for a partially imaged profile on the edge of the image, the brightness (sum of its pixel fluxes within the image, see @ref{Flux Brightness and magnitude}) will not be equal: some of the flux is going to be `eaten' by the edges.
+
+If you ran @command{$ make check} on the source files of Gnuastro, you can see this effect by comparing @file{convolve_frequency.fits} with @file{convolve_spatial.fits} in the @file{./tests/} directory.
+In the spatial domain, by default, no assumption will be made about pixels 
outside of the image or any blank pixels in the image.
+The problem explained above will also occur on the sides of blank regions (see 
@ref{Blank pixels}).
+The solution to this edge effect problem is only possible in the spatial 
domain.
+For pixels near the edge, we have to abandon the assumption that the sum of the kernel pixels is unity during the convolution process@footnote{Of course, the sum of the kernel pixels still has to be unity in general.}.
+So taking @mymath{W} as the sum of the kernel pixels that overlapped with 
non-blank and in-image pixels, the equation in @ref{Convolution process} will 
become:
 
 @dispmath{C_{x,y}= { \sum_{s=-a}^{a}\sum_{t=-b}^{b}K_{s,t}\times{}I_{x+s,y+t} 
\over W}.}
 
 @noindent
-In this manner, objects which are near the edges of the image or blank
-pixels will also have the same brightness (within the image) before and
-after convolution. This correction is applied by default in Convolve when
-convolving in the spatial domain. To disable it, you can use the
-@option{--noedgecorrection} option. In the frequency domain, there is no
-way to avoid this loss of flux near the edges of the image, see @ref{Edges
-in the frequency domain} for an interpretation from the frequency domain
-perspective.
+In this manner, objects which are near the edges of the image or blank pixels 
will also have the same brightness (within the image) before and after 
convolution.
+This correction is applied by default in Convolve when convolving in the 
spatial domain.
+To disable it, you can use the @option{--noedgecorrection} option.
+In the frequency domain, there is no way to avoid this loss of flux near the 
edges of the image, see @ref{Edges in the frequency domain} for an 
interpretation from the frequency domain perspective.
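+
+For example, you can compare the two behaviors by convolving once with and once without the correction (assuming hypothetical image and kernel files):
+
+@example
+$ astconvolve image.fits --kernel=kernel.fits --domain=spatial
+$ astconvolve image.fits --kernel=kernel.fits --domain=spatial \
+              --noedgecorrection
+@end example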
 
-Note that the edge effect discussed here is different from the one in
-@ref{If convolving afterwards}. In making mock images we want to
-simulate a real observation. In a real observation the images of the
-galaxies on the sides of the CCD are first blurred by the atmosphere
-and instrument, then imaged. So light from the parts of a galaxy which
-are immediately outside the CCD will affect the parts of the galaxy
-which are covered by the CCD. Therefore in modeling the observation,
-we have to convolve an image that is larger than the input image by
-exactly half of the convolution kernel. We can hence conclude that
-this correction for the edges is only useful when working on actual
-observed images (where we don't have any more data on the edges) and
-not in modeling.
+Note that the edge effect discussed here is different from the one in @ref{If 
convolving afterwards}.
+In making mock images we want to simulate a real observation.
+In a real observation the images of the galaxies on the sides of the CCD are 
first blurred by the atmosphere and instrument, then imaged.
+So light from the parts of a galaxy which are immediately outside the CCD will 
affect the parts of the galaxy which are covered by the CCD.
+Therefore in modeling the observation, we have to convolve an image that is 
larger than the input image by exactly half of the convolution kernel.
+We can hence conclude that this correction for the edges is only useful when 
working on actual observed images (where we don't have any more data on the 
edges) and not in modeling.
 
 
 
 @node Frequency domain and Fourier operations, Spatial vs. Frequency domain, 
Spatial domain convolution, Convolve
 @subsection Frequency domain and Fourier operations
 
-Getting a good grip on the frequency domain is usually not an easy
-job! So we have decided to give the issue a complete review
-here. Convolution in the frequency domain (see @ref{Convolution
-theorem}) heavily relies on the concepts of Fourier transform
-(@ref{Fourier transform}) and Fourier series (@ref{Fourier series}) so
-we will be investigating these important operations first. It has
-become something of a clich@'e for people to say that the Fourier
-series ``is a way to represent a (wave-like) function as the sum of
-simple sine waves'' (from Wikipedia). However, sines themselves are
-abstract functions, so this statement really adds no extra layer of
-physical insight.
-
-Before jumping head-first into the equations and proofs, we will begin
-with a historical background to see how the importance of frequencies
-actually roots in our ancient desire to see everything in terms of
-circles. A short review of how the complex plane should be
-interpreted is then given. Having paved the way with these two
-basics, we define the Fourier series and subsequently the Fourier
-transform. The final aim is to explain discrete Fourier transform,
-however some very important concepts need to be solidified first: The
-Dirac comb, convolution theorem and sampling theorem. So each of these
-topics are explained in their own separate sub-sub-section before
-going on to the discrete Fourier transform. Finally we revisit (after
-@ref{Edges in the spatial domain}) the problem of convolution on the
-edges, but this time in the frequency domain. Understanding the
-sampling theorem and the discrete Fourier transform is very important
-in order to be able to pull out valuable science from the discrete
-image pixels. Therefore we have included the mathematical proofs and
-figures so you can have a clear understanding of these very important
-concepts.
+Getting a good grip on the frequency domain is usually not an easy job!
+So we have decided to give the issue a complete review here.
+Convolution in the frequency domain (see @ref{Convolution theorem}) heavily 
relies on the concepts of Fourier transform (@ref{Fourier transform}) and 
Fourier series (@ref{Fourier series}) so we will be investigating these 
important operations first.
+It has become something of a clich@'e for people to say that the Fourier 
series ``is a way to represent a (wave-like) function as the sum of simple sine 
waves'' (from Wikipedia).
+However, sines themselves are abstract functions, so this statement really 
adds no extra layer of physical insight.
+
+Before jumping head-first into the equations and proofs, we will begin with a historical background to see how the importance of frequencies is actually rooted in our ancient desire to see everything in terms of circles.
+A short review of how the complex plane should be interpreted is then given.
+Having paved the way with these two basics, we define the Fourier series and 
subsequently the Fourier transform.
+The final aim is to explain the discrete Fourier transform; however, some very important concepts need to be solidified first: the Dirac comb, the convolution theorem and the sampling theorem.
+So each of these topics is explained in its own separate sub-sub-section before going on to the discrete Fourier transform.
+Finally we revisit (after @ref{Edges in the spatial domain}) the problem of 
convolution on the edges, but this time in the frequency domain.
+Understanding the sampling theorem and the discrete Fourier transform is very 
important in order to be able to pull out valuable science from the discrete 
image pixels.
+Therefore we have included the mathematical proofs and figures so you can have 
a clear understanding of these very important concepts.
 
 @menu
 * Fourier series historical background::  Historical background.
@@ -14121,27 +10488,15 @@ concepts.
 
 @node Fourier series historical background, Circles and the complex plane, 
Frequency domain and Fourier operations, Frequency domain and Fourier operations
 @subsubsection Fourier series historical background
-Ever since the ancient times, the circle has been (and still is) the
-simplest shape for abstract comprehension. All you need is a center
-point and a radius and you are done. All the points on a circle are at
-a fixed distance from the center. However, the moment you try to
-connect this elegantly simple and beautiful abstract construct (the
-circle) with the real world (for example compute its area or its
-circumference), things become really hard (ideally, impossible)
-because the irrational number @mymath{\pi} gets involved.
-
-The key to understanding the Fourier series (thus the Fourier
-transform and finally the Discrete Fourier Transform) is our ancient
-desire to express everything in terms of circles or the most
-exceptionally simple and elegant abstract human construct. Most people
-prefer to say the same thing in a more ahistorical manner: to break a
-function into sines and cosines. As the term ``ancient'' in the
-previous sentence implies, Jean-Baptiste Joseph Fourier (1768 -- 1830
-A.D.) was not the first person to do this. The main reason we know
-this process by his name today is that he came up with an ingenious
-method to find the necessary coefficients (radius of) and frequencies
-(``speed'' of rotation on) the circles for any generic (integrable)
-function.
+Ever since ancient times, the circle has been (and still is) the simplest shape for abstract comprehension.
+All you need is a center point and a radius and you are done.
+All the points on a circle are at a fixed distance from the center.
+However, the moment you try to connect this elegantly simple and beautiful 
abstract construct (the circle) with the real world (for example compute its 
area or its circumference), things become really hard (ideally, impossible) 
because the irrational number @mymath{\pi} gets involved.
+
+The key to understanding the Fourier series (thus the Fourier transform and finally the Discrete Fourier Transform) is our ancient desire to express everything in terms of circles, the most exceptionally simple and elegant abstract human construct.
+Most people prefer to say the same thing in a more ahistorical manner: to 
break a function into sines and cosines.
+As the term ``ancient'' in the previous sentence implies, Jean-Baptiste Joseph 
Fourier (1768 -- 1830 A.D.) was not the first person to do this.
+The main reason we know this process by his name today is that he came up with an ingenious method to find the necessary coefficients (radii of) and frequencies (``speeds'' of rotation on) the circles for any generic (integrable) function.
 
 @float Figure,epicycle
 
@@ -14149,79 +10504,43 @@ function.
 @c jump out of the text width.
 @cindex Qutb al-Din al-Shirazi
 @cindex al-Shirazi, Qutb al-Din
-@image{gnuastro-figures/epicycles, 15.2cm, , Middle ages epicycles along
-with two demonstrations of breaking a generic function using epicycles.}
-@caption{Epicycles and the Fourier series. Left: A demonstration of
-Mercury's epicycles relative to the ``center of the world'' by Qutb al-Din
-al-Shirazi (1236 -- 1311 A.D.) retrieved
-@url{https://commons.wikimedia.org/wiki/File:Ghotb2.jpg, from
-Wikipedia}. 
@url{https://commons.wikimedia.org/wiki/File:Fourier_series_square_wave_circles_animation.gif,
-Middle} and Right: How adding more epicycles (or terms in the Fourier
-series) will approximate functions. The
-@url{https://commons.wikimedia.org/wiki/File:Fourier_series_sawtooth_wave_circles_animation.gif,
-right} animation is also available.}
+@image{gnuastro-figures/epicycles, 15.2cm, , Middle ages epicycles along with 
two demonstrations of breaking a generic function using epicycles.}
+@caption{Epicycles and the Fourier series.
+Left: A demonstration of Mercury's epicycles relative to the ``center of the 
world'' by Qutb al-Din al-Shirazi (1236 -- 1311 A.D.) retrieved 
@url{https://commons.wikimedia.org/wiki/File:Ghotb2.jpg, from Wikipedia}.
+@url{https://commons.wikimedia.org/wiki/File:Fourier_series_square_wave_circles_animation.gif,
 Middle} and
+Right: How adding more epicycles (or terms in the Fourier series) will 
approximate functions.
+The 
@url{https://commons.wikimedia.org/wiki/File:Fourier_series_sawtooth_wave_circles_animation.gif,
 right} animation is also available.}
 @end float
 
-Like most aspects of mathematics, this process of interpreting
-everything in terms of circles, began for astronomical purposes. When
-astronomers noticed that the orbit of Mars and other outer planets,
-did not appear to be a simple circle (as everything should have been
-in the heavens). At some point during their orbit, the revolution of
-these planets would become slower, stop, go back a little (in what is
-known as the retrograde motion) and then continue going forward
-again.
-
-The correction proposed by Ptolemy (90 -- 168 A.D.) was the most
-agreed upon. He put the planets on Epicycles or circles whose center
-itself rotates on a circle whose center is the earth. Eventually, as
-observations became more and more precise, it was necessary to add
-more and more epicycles in order to explain the complex motions of the
-planets@footnote{See the Wikipedia page on ``Deferent and epicycle''
-for a more complete historical review.}. @ref{epicycle}(Left) shows an
-example depiction of the epicycles of Mercury in the late 13th
-century.
+Like most aspects of mathematics, this process of interpreting everything in terms of circles began for astronomical purposes.
+Astronomers noticed that the orbits of Mars and the other outer planets did not appear to be simple circles (as everything should have been in the heavens).
+At some point during their orbits, the revolution of these planets would become slower, stop, go back a little (in what is known as retrograde motion) and then continue going forward again.
+
+The correction proposed by Ptolemy (90 -- 168 A.D.) was the most agreed upon.
+He put the planets on epicycles, or circles whose centers themselves rotate on a circle whose center is the Earth.
+Eventually, as observations became more and more precise, it was necessary to 
add more and more epicycles in order to explain the complex motions of the 
planets@footnote{See the Wikipedia page on ``Deferent and epicycle'' for a more 
complete historical review.}.
+@ref{epicycle}(Left) shows an example depiction of the epicycles of Mercury in 
the late 13th century.
 
 @cindex Aristarchus of Samos
-Of course we now know that if they had abdicated the Earth from its
-throne in the center of the heavens and allowed the Sun to take its
-place, everything would become much simpler and true. But there wasn't
-enough observational evidence for changing the ``professional
-consensus'' of the time to this radical view suggested by a small
-minority@footnote{Aristarchus of Samos (310 -- 230 B.C.) appears to be
-one of the first people to suggest the Sun being in the center of the
-universe. This approach to science (that the standard model is defined
-by consensus) and the fact that this consensus might be completely
-wrong still applies equally well to our models of particle physics and
-cosmology today.}. So the pre-Galilean astronomers chose to keep Earth
-in the center and find a correction to the models (while keeping the
-heavens a purely ``circular'' order).
-
-The main reason we are giving this historical background which might
-appear off topic is to give historical evidence that while such
-``approximations'' do work and are very useful for pragmatic reasons
-(like measuring the calendar from the movement of astronomical
-bodies). They offer no physical insight. The astronomers who were
-involved with the Ptolemaic world view had to add a huge number of
-epicycles during the centuries after Ptolemy in order to explain more
-accurate observations. Finally the death knell of this world-view was
-Galileo's observations with his new instrument (the telescope). So the
-physical insight, which is what Astronomers and Physicists are
-interested in (as opposed to Mathematicians and Engineers who just
-like proving and optimizing or calculating!) comes from being creative
-and not limiting our selves to such approximations. Even when they
-work.
+Of course, we now know that if they had removed the Earth from its throne in the center of the heavens and allowed the Sun to take its place, everything would become much simpler and true.
+But there wasn't enough observational evidence for changing the ``professional 
consensus'' of the time to this radical view suggested by a small 
minority@footnote{Aristarchus of Samos (310 -- 230 B.C.) appears to be one of 
the first people to suggest the Sun being in the center of the universe.
+This approach to science (that the standard model is defined by consensus) and the fact that this consensus might be completely wrong still apply equally well to our models of particle physics and cosmology today.}.
+So the pre-Galilean astronomers chose to keep Earth in the center and find a 
correction to the models (while keeping the heavens a purely ``circular'' 
order).
+
+The main reason we are giving this historical background, which might appear off topic, is to give historical evidence that while such ``approximations'' do work and are very useful for pragmatic reasons (like measuring the calendar from the movement of astronomical bodies), they offer no physical insight.
+The astronomers who were involved with the Ptolemaic world view had to add a 
huge number of epicycles during the centuries after Ptolemy in order to explain 
more accurate observations.
+Finally the death knell of this world-view was Galileo's observations with his 
new instrument (the telescope).
+So the physical insight, which is what Astronomers and Physicists are interested in (as opposed to Mathematicians and Engineers who just like proving and optimizing or calculating!), comes from being creative and not limiting ourselves to such approximations, even when they work.
 
 @node Circles and the complex plane, Fourier series, Fourier series historical 
background, Frequency domain and Fourier operations
 @subsubsection Circles and the complex plane
-Before going onto the derivation, it is also useful to review how the
-complex numbers and their plane relate to the circles we talked about
-above. The two schematics in the middle and right of @ref{epicycle}
-show how a 1D function of time can be made using the 2D real and
-imaginary surface. Seeing the animation in Wikipedia will really help
-in understanding this important concept. At each point in time, we
-take the vertical coordinate of the point and use it to find the value
-of the function at that point in time. @ref{iandtime} shows this
-relation with the axes marked.
+Before going on to the derivation, it is also useful to review how complex numbers and their plane relate to the circles we talked about above.
+The two schematics in the middle and right of @ref{epicycle} show how a 1D 
function of time can be made using the 2D real and imaginary surface.
+Seeing the animation on Wikipedia will really help in understanding this important concept.
+At each point in time, we take the vertical coordinate of the point and use it 
to find the value of the function at that point in time.
+@ref{iandtime} shows this relation with the axes marked.
 
 @cindex Roger Cotes
 @cindex Cotes, Roger
@@ -14231,81 +10550,46 @@ relation with the axes marked.
 @cindex Euler, Leonhard
 @cindex Abraham de Moivre
 @cindex de Moivre, Abraham
-Leonhard Euler@footnote{Other forms of this equation were known before
-Euler. For example in 1707 A.D. (the year of Euler's birth) Abraham de
-Moivre (1667 -- 1754 A.D.)  showed that
-@mymath{(\cos{x}+i\sin{x})^n=\cos(nx)+i\sin(nx)}. In 1714 A.D., Roger Cotes
-(1682 -- 1716 A.D. a colleague of Newton who proofread the second edition
-of Principia) showed that: @mymath{ix=\ln(\cos{x}+i\sin{x})}.}  (1707 --
-1783 A.D.)  showed that the complex exponential (@mymath{e^{iv}} where
-@mymath{v} is real) is periodic and can be written as:
-@mymath{e^{iv}=\cos{v}+isin{v}}. Therefore
-@mymath{e^{iv+2\pi}=e^{iv}}. Later, Caspar Wessel (mathematician and
-cartographer 1745 -- 1818 A.D.)  showed how complex numbers can be
-displayed as vectors on a plane. Euler's identity might seem counter
-intuitive at first, so we will try to explain it geometrically (for deeper
-physical insight). On the real-imaginary 2D plane (like the left hand plot
-in each box of @ref{iandtime}), multiplying a number by @mymath{i} can be
-interpreted as rotating the point by @mymath{90} degrees (for example the
-value @mymath{3} on the real axis becomes @mymath{3i} on the imaginary
-axis). On the other hand,
-@mymath{e\equiv\lim_{n\rightarrow\infty}(1+{1\over n})^n}, therefore,
-defining @mymath{m\equiv nu}, we get:
+Leonhard Euler@footnote{Other forms of this equation were known before Euler.
+For example in 1707 A.D. (the year of Euler's birth) Abraham de Moivre (1667 -- 1754 A.D.) showed that @mymath{(\cos{x}+i\sin{x})^n=\cos(nx)+i\sin(nx)}.
+In 1714 A.D., Roger Cotes (1682 -- 1716 A.D., a colleague of Newton who proofread the second edition of Principia) showed that @mymath{ix=\ln(\cos{x}+i\sin{x})}.} (1707 -- 1783 A.D.) showed that the complex exponential (@mymath{e^{iv}} where @mymath{v} is real) is periodic and can be written as @mymath{e^{iv}=\cos{v}+i\sin{v}}.
+Therefore @mymath{e^{iv+2\pi}=e^{iv}}.
+Later, Caspar Wessel (mathematician and cartographer, 1745 -- 1818 A.D.) showed how complex numbers can be displayed as vectors on a plane.
+Euler's identity might seem counterintuitive at first, so we will try to explain it geometrically (for deeper physical insight).
+On the real-imaginary 2D plane (like the left hand plot in each box of 
@ref{iandtime}), multiplying a number by @mymath{i} can be interpreted as 
rotating the point by @mymath{90} degrees (for example the value @mymath{3} on 
the real axis becomes @mymath{3i} on the imaginary axis).
+On the other hand, @mymath{e\equiv\lim_{n\rightarrow\infty}(1+{1\over n})^n}, 
therefore, defining @mymath{m\equiv nu}, we get:
 
 @dispmath{e^{u}=\lim_{n\rightarrow\infty}\left(1+{1\over n}\right)^{nu}
                =\lim_{n\rightarrow\infty}\left(1+{u\over nu}\right)^{nu}
                =\lim_{m\rightarrow\infty}\left(1+{u\over m}\right)^{m}}
 
 @noindent
-Taking @mymath{u\equiv iv} the result can be written as a generic complex
-number (a function of @mymath{v}):
+Taking @mymath{u\equiv iv} the result can be written as a generic complex 
number (a function of @mymath{v}):
 
 @dispmath{e^{iv}=\lim_{m\rightarrow\infty}\left(1+i{v\over
                 m}\right)^{m}=a(v)+ib(v)}
 
 @noindent
-For @mymath{v=\pi}, a nice geometric animation of going to the limit can be
-seen @url{https://commons.wikimedia.org/wiki/File:ExpIPi.gif, on
-Wikipedia}. We see that @mymath{\lim_{m\rightarrow\infty}a(\pi)=-1}, while
-@mymath{\lim_{m\rightarrow\infty}b(\pi)=0}, which gives the famous
-@mymath{e^{i\pi}=-1} equation. The final value is the real number
-@mymath{-1}, however the distance of the polygon points traversed as
-@mymath{m\rightarrow\infty} is half the circumference of a circle or
-@mymath{\pi}, showing how @mymath{v} in the equation above can be
-interpreted as an angle in units of radians and therefore how
-@mymath{a(v)=cos(v)} and @mymath{b(v)=sin(v)}.
-
-Since @mymath{e^{iv}} is periodic (let's assume with a period of
-@mymath{T}), it is more clear to write it as @mymath{v\equiv{2{\pi}n\over
-T}t} (where @mymath{n} is an integer), so @mymath{e^{iv}=e^{i{2{\pi}n\over
-T}t}}. The advantage of this notation is that the period (@mymath{T}) is
-clearly visible and the frequency (@mymath{2{\pi}n \over T}, in units of
-1/cycle) is defined through the integer @mymath{n}. In this notation,
-@mymath{t} is in units of ``cycle''s.
-
-As we see from the examples in @ref{epicycle} and @ref{iandtime}, for each
-constituting frequency, we need a respective `magnitude' or the radius of
-the circle in order to accurately approximate the desired 1D function. The
-concepts of ``period'' and ``frequency'' are relatively easy to grasp when
-using temporal units like time because this is how we define them in
-every-day life. However, in an image (astronomical data), we are dealing
-with spatial units like distance. Therefore, by one ``period'' we mean the
-@emph{distance} at which the signal is identical and frequency is defined
-as the inverse of that spatial ``period''.  The complex circle of
-@ref{iandtime} can be thought of the Moon rotating about Earth which is
-rotating around the Sun; so the ``Real (signal)'' axis shows the Moon's
-position as seen by a distant observer on the Sun as time goes by.  Because
-of the scalar (not having any direction or vector) nature of time,
-@ref{iandtime} is easier to understand in units of time. When thinking
-about spatial units, mentally replace the ``Time (sec)'' axis with
-``Distance (meters)''. Because length has direction and is a vector,
-visualizing the rotation of the imaginary circle and the advance along the
-``Distance (meters)'' axis is not as simple as temporal units like time.
+For @mymath{v=\pi}, a nice geometric animation of going to the limit can be 
seen @url{https://commons.wikimedia.org/wiki/File:ExpIPi.gif, on Wikipedia}.
+We see that @mymath{\lim_{m\rightarrow\infty}a(\pi)=-1}, while 
@mymath{\lim_{m\rightarrow\infty}b(\pi)=0}, which gives the famous 
@mymath{e^{i\pi}=-1} equation.
+The final value is the real number @mymath{-1}; however, the distance of the polygon points traversed as @mymath{m\rightarrow\infty} is half the circumference of a circle, or @mymath{\pi}, showing how @mymath{v} in the equation above can be interpreted as an angle in units of radians, and therefore how @mymath{a(v)=\cos(v)} and @mymath{b(v)=\sin(v)}.
+
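+As a quick numerical check of the famous @mymath{e^{i\pi}=-1} result above, C's standard complex arithmetic can be used (a minimal sketch, using @code{cexp} from C99's @file{complex.h}):
+
+@example
+#include <stdio.h>
+#include <complex.h>
+
+int
+main(void)
+@{
+  double v=3.141592653589793;      /* v is (almost exactly) pi.    */
+  double complex z=cexp(I*v);      /* e^@{iv@}=cos(v)+i*sin(v).      */
+  printf("%.10f %+.10fi\n", creal(z), cimag(z));  /* ~ -1 +0i      */
+  return 0;
+@}
+@end example
+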
+Since @mymath{e^{iv}} is periodic (let's assume with a period of @mymath{T}), it is clearer to write it as @mymath{v\equiv{2{\pi}n\over T}t} (where @mymath{n} is an integer), so @mymath{e^{iv}=e^{i{2{\pi}n\over T}t}}.
+The advantage of this notation is that the period (@mymath{T}) is clearly 
visible and the frequency (@mymath{2{\pi}n \over T}, in units of 1/cycle) is 
defined through the integer @mymath{n}.
+In this notation, @mymath{t} is in units of ``cycle''s.
+
+As we see from the examples in @ref{epicycle} and @ref{iandtime}, for each 
constituting frequency, we need a respective `magnitude' or the radius of the 
circle in order to accurately approximate the desired 1D function.
+The concepts of ``period'' and ``frequency'' are relatively easy to grasp when using temporal units like time, because this is how we define them in everyday life.
+However, in an image (astronomical data), we are dealing with spatial units 
like distance.
+Therefore, by one ``period'' we mean the @emph{distance} at which the signal 
is identical and frequency is defined as the inverse of that spatial ``period''.
+The complex circle of @ref{iandtime} can be thought of as the Moon rotating about the Earth, which is rotating around the Sun; so the ``Real (signal)'' axis shows the Moon's position as seen by a distant observer on the Sun as time goes by.
+Because of the scalar (not having any direction or vector) nature of time, 
@ref{iandtime} is easier to understand in units of time.
+When thinking about spatial units, mentally replace the ``Time (sec)'' axis 
with ``Distance (meters)''.
+Because length has direction and is a vector, visualizing the rotation of the 
imaginary circle and the advance along the ``Distance (meters)'' axis is not as 
simple as temporal units like time.
 
 @float Figure,iandtime
-@image{gnuastro-figures/iandtime, 15.2cm, , } @caption{Relation
-between the real (signal), imaginary (@mymath{i\equiv\sqrt{-1}}) and
-time axes at two snapshots of time.}
+@image{gnuastro-figures/iandtime, 15.2cm, , }
+@caption{Relation between the real (signal), imaginary 
(@mymath{i\equiv\sqrt{-1}}) and time axes at two snapshots of time.}
 @end float
 
 
@@ -14313,62 +10597,42 @@ time axes at two snapshots of time.}
 
 @node Fourier series, Fourier transform, Circles and the complex plane, 
Frequency domain and Fourier operations
 @subsubsection Fourier series
-In astronomical images, our variable (brightness, or number of
-photo-electrons, or signal to be more generic) is recorded over the 2D
-spatial surface of a camera pixel. However to make things easier to
-understand, here we will assume that the signal is recorded in 1D
-(assume one row of the 2D image pixels). Also for this section and the
-next (@ref{Fourier transform}) we will be talking about the signal
-before it is digitized or pixelated. Let's assume that we have the
-continuous function @mymath{f(l)} which is integrable in the interval
-@mymath{[l_0, l_0+L]} (always true in practical cases like
-images). Take @mymath{l_0} as the position of the first pixel in the
-assumed row of the image and @mymath{L} as the width of the image
-along that row. The units of @mymath{l_0} and @mymath{L} can be in any
-spatial units (for example meters) or an angular unit (like radians)
-multiplied by a fixed distance which is more common.
-
-To approximate @mymath{f(l)} over this interval, we need to find a set
-of frequencies and their corresponding `magnitude's (see @ref{Circles
-and the complex plane}). Therefore our aim is to show @mymath{f(l)} as
-the following sum of periodic functions:
+In astronomical images, our variable (brightness, or number of 
photo-electrons, or signal to be more generic) is recorded over the 2D spatial 
surface of a camera pixel.
+However, to make things easier to understand, here we will assume that the signal is recorded in 1D (assume one row of the 2D image pixels).
+Also for this section and the next (@ref{Fourier transform}) we will be 
talking about the signal before it is digitized or pixelated.
+Let's assume that we have the continuous function @mymath{f(l)} which is 
integrable in the interval @mymath{[l_0, l_0+L]} (always true in practical 
cases like images).
+Take @mymath{l_0} as the position of the first pixel in the assumed row of the 
image and @mymath{L} as the width of the image along that row.
+The units of @mymath{l_0} and @mymath{L} can be in any spatial units (for 
example meters) or an angular unit (like radians) multiplied by a fixed 
distance which is more common.
+
+To approximate @mymath{f(l)} over this interval, we need to find a set of 
frequencies and their corresponding `magnitudes' (see @ref{Circles and the 
complex plane}).
+Therefore our aim is to show @mymath{f(l)} as the following sum of periodic 
functions:
 
 
 @dispmath{
 f(l)=\displaystyle\sum_{n=-\infty}^{\infty}c_ne^{i{2{\pi}n\over L}l} }
 
 @noindent
-Note that the different frequencies (@mymath{2{\pi}n/L}, in units of cycles
-per meters for example) are not arbitrary. They are all integer multiples
-of the fundamental frequency of @mymath{\omega_0=2\pi/L}. Recall that
-@mymath{L} was the length of the signal we want to model. Therefore, we see
-that the smallest possible frequency (or the frequency resolution) in the
-end, depends on the length we observed the signal or @mymath{L}. In the
-case of each dimension on an image, this is the size of the image in the
-respective dimension. The frequencies have been defined in this
-``harmonic'' fashion to insure that the final sum is periodic outside of
-the @mymath{[l_0, l_0+L]} interval too. At this point, you might be
-thinking that the sky is not periodic with the same period as my camera's
-view angle. You are absolutely right! The important thing is that since
-your camera's observed region is the only region we are ``observing'' and
-will be using, the rest of the sky is irrelevant; so we can safely assume
-the sky is periodic outside of it. However, this working assumption will
-haunt us later in @ref{Edges in the frequency domain}.
-
-The frequencies are thus determined by definition. So all we need to
-do is to find the coefficients (@mymath{c_n}), or magnitudes, or radii
-of the circles for each frequency which is identified with the integer
-@mymath{n}.  Fourier's approach was to multiply both sides with a
-fixed term:
+Note that the different frequencies (@mymath{2{\pi}n/L}, in units of cycles 
per meter, for example) are not arbitrary.
+They are all integer multiples of the fundamental frequency of 
@mymath{\omega_0=2\pi/L}.
+Recall that @mymath{L} was the length of the signal we want to model.
+Therefore, we see that the smallest possible frequency (or the frequency 
resolution) ultimately depends on the length over which we observed the 
signal, @mymath{L}.
+In the case of each dimension on an image, this is the size of the image in 
the respective dimension.
+The frequencies have been defined in this ``harmonic'' fashion to ensure that 
the final sum is periodic outside of the @mymath{[l_0, l_0+L]} interval too.
+At this point, you might be thinking that the sky is not periodic with the 
same period as my camera's view angle.
+You are absolutely right!
+The important thing is that since your camera's observed region is the only 
region we are ``observing'' and will be using, the rest of the sky is 
irrelevant; so we can safely assume the sky is periodic outside of it.
+However, this working assumption will haunt us later in @ref{Edges in the 
frequency domain}.
+
+The frequencies are thus determined by definition.
+So all we need to do is to find the coefficients (@mymath{c_n}), or 
magnitudes, or radii of the circles for each frequency which is identified with 
the integer @mymath{n}.
+Fourier's approach was to multiply both sides with a fixed term:
 
 @dispmath{
 f(l)e^{-i{2{\pi}m\over 
L}l}=\displaystyle\sum_{n=-\infty}^{\infty}c_ne^{i{2{\pi}(n-m)\over L}l}
 }
 
 @noindent
-where @mymath{m>0}@footnote{ We could have assumed @mymath{m<0} and
-set the exponential to positive, but this is more clear.}. We can then
-integrate both sides over the observation period:
+where @mymath{m>0}@footnote{We could have assumed @mymath{m<0} and set the 
exponential to positive, but this form is clearer.}.
+We can then integrate both sides over the observation period:
 
 @dispmath{
 \int_{l_0}^{l_0+L}f(l)e^{-i{2{\pi}m\over L}l}dl
@@ -14376,22 +10640,19 @@ integrate both sides over the observation period:
 }
 
 @noindent
-Both @mymath{n} and @mymath{m} are positive integers. Also, we know
-that a complex exponential is periodic so after one period
-(@mymath{L}) it comes back to its starting point. Therefore
-@mymath{\int_{l_0}^{l_0+L}e^{2{\pi}k/L}dl=0} for any @mymath{k>0}.
-However, when @mymath{k=0}, this integral becomes:
-@mymath{\int_{l_0}^{l_0+T}e^0dt=\int_{l_0}^{l_0+T}dt=T}. Hence since
-the integral will be zero for all @mymath{n{\neq}m}, we get:
+Here, @mymath{m} is a positive integer and @mymath{n} ranges over all 
integers.
+Also, we know that a complex exponential is periodic so after one period 
(@mymath{L}) it comes back to its starting point.
+Therefore @mymath{\int_{l_0}^{l_0+L}e^{i{2{\pi}k\over L}l}dl=0} for any 
integer @mymath{k\neq0}.
+However, when @mymath{k=0}, this integral becomes: 
@mymath{\int_{l_0}^{l_0+L}e^0dl=\int_{l_0}^{l_0+L}dl=L}.
+Hence, since the integral will be zero for all @mymath{n{\neq}m}, we get:
 
 @dispmath{
 
\displaystyle\sum_{n=-\infty}^{\infty}c_n\int_{l_0}^{l_0+L}e^{i{2{\pi}(n-m)\over
 L}l}dl=Lc_m }
 
 @noindent
-The origin of the axis is fundamentally an arbitrary position. So
-let's set it to the start of the image such that @mymath{l_0=0}. So we
-can find the ``magnitude'' of the frequency @mymath{2{\pi}m/L} within
-@mymath{f(l)} through the relation:
+The origin of the axis is fundamentally an arbitrary position.
+So let's set it to the start of the image such that @mymath{l_0=0}.
+Then we can find the ``magnitude'' of the frequency @mymath{2{\pi}m/L} within 
@mymath{f(l)} through the relation:
 
 @dispmath{ c_m={1\over L}\int_{0}^{L}f(l)e^{-i{2{\pi}m\over L}l}dl }
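+
+To make this relation concrete, here is a minimal numerical sketch (in
+Python with NumPy; this is only an illustration, not part of Gnuastro,
+and the test function, interval and number of samples are arbitrary
+assumptions).
+For @mymath{f(l)=3+\cos(2{\pi}4l/L)}, we expect @mymath{c_0=3} and
+@mymath{c_4=c_{-4}=1/2}:
+
+@example
+import numpy as np
+
+L = 10.0                             # length of the observed interval
+l = np.linspace(0, L, 10000, endpoint=False)
+f = 3.0 + np.cos(2*np.pi*4*l/L)      # expect c_0=3 and c_4=0.5
+
+def c(m):
+    # Approximate the c_m integral with a simple Riemann sum.
+    return np.sum(f*np.exp(-1j*2*np.pi*m*l/L))*(L/len(l))/L
+
+for m in (0, 1, 4):
+    print(m, np.round(c(m), 6))      # -> 3, 0 and 0.5 respectively
+@end example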
 
@@ -14399,18 +10660,11 @@ can find the ``magnitude'' of the frequency 
@mymath{2{\pi}m/L} within
 
 @node Fourier transform, Dirac delta and comb, Fourier series, Frequency 
domain and Fourier operations
 @subsubsection Fourier transform
-In @ref{Fourier series}, we had to assume that the function is
-periodic outside of the desired interval with a period of
-@mymath{L}. Therefore, assuming that @mymath{L\rightarrow\infty} will
-allow us to work with any function. However, with this approximation,
-the fundamental frequency (@mymath{\omega_0}) or the frequency
-resolution that we discussed in @ref{Fourier series} will tend to
-zero: @mymath{\omega_0\rightarrow0}. In the equation to find
-@mymath{c_m}, every @mymath{m} represented a frequency (multiple of
-@mymath{\omega_0}) and the integration on @mymath{l} removes the
-dependence of the right side of the equation on @mymath{l}, making it
-only a function of @mymath{m} or frequency. Let's define the following
-two variables:
+In @ref{Fourier series}, we had to assume that the function is periodic 
outside of the desired interval with a period of @mymath{L}.
+Therefore, assuming that @mymath{L\rightarrow\infty} will allow us to work 
with any function.
+However, with this approximation, the fundamental frequency 
(@mymath{\omega_0}) or the frequency resolution that we discussed in 
@ref{Fourier series} will tend to zero: @mymath{\omega_0\rightarrow0}.
+In the equation to find @mymath{c_m}, every @mymath{m} represented a frequency 
(multiple of @mymath{\omega_0}) and the integration on @mymath{l} removes the 
dependence of the right side of the equation on @mymath{l}, making it only a 
function of @mymath{m} or frequency.
+Let's define the following two variables:
 
 @dispmath{\omega{\equiv}m\omega_0={2{\pi}m\over L}}
 
@@ -14420,22 +10674,13 @@ two variables:
 The equation to find the coefficients of each frequency in
 @ref{Fourier series} thus becomes:
 
-@dispmath{ F(\omega)=\int_{-\infty}^{\infty}f(l)e^{-i{\omega}l}dl.  }
+@dispmath{ F(\omega)=\int_{-\infty}^{\infty}f(l)e^{-i{\omega}l}dl.}
 
 @noindent
-The function @mymath{F(\omega)} is thus the @emph{Fourier transform}
-of @mymath{f(l)} in the frequency domain. So through this
-transformation, we can find (analyze) the magnitudes of the
-constituting frequencies or the value in the frequency
-space@footnote{As we discussed before, this `magnitude' can be
-interpreted as the radius of the circle rotating at this frequency in
-the epicyclic interpretation of the Fourier series, see @ref{epicycle}
-and @ref{iandtime}.}  of our spatial input function. The great thing
-is that we can also do the reverse and later synthesize the input
-function from its Fourier transform. Let's do it: with the
-approximations above, multiply the right side of the definition of the
-Fourier Series (@ref{Fourier series}) with
-@mymath{1=L/L=({\omega_0}L)/(2\pi)}:
+The function @mymath{F(\omega)} is thus the @emph{Fourier transform} of 
@mymath{f(l)} in the frequency domain.
+So through this transformation, we can find (analyze) the magnitudes of the 
constituent frequencies or the value in the frequency space@footnote{As we 
discussed before, this `magnitude' can be interpreted as the radius of the 
circle rotating at this frequency in the epicyclic interpretation of the 
Fourier series, see @ref{epicycle} and @ref{iandtime}.} of our spatial input 
function.
+The great thing is that we can also do the reverse and later synthesize the 
input function from its Fourier transform.
+Let's do it: with the approximations above, multiply the right side of the 
definition of the Fourier Series (@ref{Fourier series}) with 
@mymath{1=L/L=({\omega_0}L)/(2\pi)}:
 
 @dispmath{ f(l)={1\over
 2\pi}\displaystyle\sum_{n=-\infty}^{\infty}Lc_ne^{{2{\pi}in\over
@@ -14445,53 +10690,40 @@ L}l}\omega_0={1\over
 
 
 @noindent
-To find the right most side of this equation, we renamed
-@mymath{\omega_0} as @mymath{\Delta\omega} because it was our
-resolution, @mymath{2{\pi}n/L} was written as @mymath{\omega} and
-finally, @mymath{Lc_n} was written as @mymath{F(\omega)} as we defined
-above. Now, as @mymath{L\rightarrow\infty},
-@mymath{\Delta\omega\rightarrow0} so we can write:
+To find the rightmost side of this equation, we renamed @mymath{\omega_0} as 
@mymath{\Delta\omega} because it was our resolution, @mymath{2{\pi}n/L} was 
written as @mymath{\omega} and finally, @mymath{Lc_n} was written as 
@mymath{F(\omega)} as we defined above.
+Now, as @mymath{L\rightarrow\infty}, @mymath{\Delta\omega\rightarrow0} so we 
can write:
 
 @dispmath{ f(l)={1\over
   2\pi}\int_{-\infty}^{\infty}F(\omega)e^{i{\omega}l}d\omega }
 
-Together, these two equations provide us with a very powerful set of
-tools that we can use to process (analyze) and recreate (synthesize)
-the input signal. Through the first equation, we can break up our
-input function into its constituent frequencies and analyze it, hence
-it is also known as @emph{analysis}. Using the second equation, we can
-synthesize or make the input function from the known frequencies and
-their magnitudes. Thus it is known as @emph{synthesis}. Here, we
-symbolize the Fourier transform (analysis) and its inverse (synthesis)
-of a function @mymath{f(l)} and its Fourier Transform
-@mymath{F(\omega)} as @mymath{{\cal F}[f]} and @mymath{{\cal F}^{-1}[F]}.
+Together, these two equations provide us with a very powerful set of tools 
that we can use to process (analyze) and recreate (synthesize) the input signal.
+Through the first equation, we can break up our input function into its 
constituent frequencies and analyze it, hence it is also known as 
@emph{analysis}.
+Using the second equation, we can synthesize or make the input function from 
the known frequencies and their magnitudes.
+Thus it is known as @emph{synthesis}.
+Here, we symbolize the Fourier transform (analysis) and its inverse 
(synthesis) of a function @mymath{f(l)} and its Fourier Transform 
@mymath{F(\omega)} as @mymath{{\cal F}[f]} and @mymath{{\cal F}^{-1}[F]}.
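+
+As another hedged sketch (same Python/NumPy setting; the Gaussian input
+and the truncation of the infinite integral are assumptions made only
+for this illustration), the analysis equation can be checked against the
+known transform pair @mymath{f(l)=e^{-l^2/2}} and
+@mymath{F(\omega)=\sqrt{2\pi}e^{-\omega^2/2}}:
+
+@example
+import numpy as np
+
+l = np.linspace(-50, 50, 200001)     # truncated integration range
+dl = l[1] - l[0]
+f = np.exp(-l**2/2)                  # a real Gaussian f(l)
+
+for omega in (0.0, 1.0, 2.0):
+    F = np.sum(f*np.exp(-1j*omega*l))*dl          # Riemann sum
+    exact = np.sqrt(2*np.pi)*np.exp(-omega**2/2)  # analytic F(omega)
+    print(omega, F.real, exact)      # the two columns agree closely
+@end example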
 
 
 @node Dirac delta and comb, Convolution theorem, Fourier transform, Frequency 
domain and Fourier operations
 @subsubsection Dirac delta and comb
 
-The Dirac @mymath{\delta} (delta) function (also known as an impulse)
-is the way that we convert a continuous function into a discrete
-one. It is defined to satisfy the following integral:
+The Dirac @mymath{\delta} (delta) function (also known as an impulse) is the 
way that we convert a continuous function into a discrete one.
+It is defined to satisfy the following integral:
 
 @dispmath{\int_{-\infty}^{\infty}\delta(l)dl=1}
 
 @noindent
-When integrated with another function, it gives that function's value
-at @mymath{l=0}:
+When integrated with another function, it gives that function's value at 
@mymath{l=0}:
 
 @dispmath{\int_{-\infty}^{\infty}f(l)\delta(l)dl=f(0)}
 
 @noindent
-An impulse positioned at another point (say @mymath{l_0}) is written
-as @mymath{\delta(l-l_0)}:
+An impulse positioned at another point (say @mymath{l_0}) is written as 
@mymath{\delta(l-l_0)}:
 
 @dispmath{\int_{-\infty}^{\infty}f(l)\delta(l-l_0)dl=f(l_0)}
 
 @noindent
-The Dirac @mymath{\delta} function also operates similarly if we use
-summations instead of integrals. The Fourier transform of the delta
-function is:
+The Dirac @mymath{\delta} function also operates similarly if we use 
summations instead of integrals.
+The Fourier transform of the delta function is:
 
 @dispmath{{\cal 
F}[\delta(l)]=\int_{-\infty}^{\infty}\delta(l)e^{-i{\omega}l}dl=e^{-i{\omega}0}=1}
 
@@ -14507,18 +10739,12 @@ impulses separated by @mymath{P}:
 
 
 @noindent
-@mymath{P} is chosen to represent ``pixel width'' later in
-@ref{Sampling theorem}. Therefore the Dirac comb is periodic with a
-period of @mymath{P}. We have intentionally used a different name for
-the period of the Dirac comb compared to the input signal's length of
-observation that we showed with @mymath{L} in @ref{Fourier
-series}. This difference is highlighted here to avoid confusion later
-when these two periods are needed together in @ref{Discrete Fourier
-transform}. The Fourier transform of the Dirac comb will be necessary
-in @ref{Sampling theorem}, so let's derive it. By its definition, it
-is periodic, with a period of @mymath{P}, so the Fourier coefficients
-of its Fourier Series (@ref{Fourier series}) can be calculated within
-one period:
+@mymath{P} is chosen to represent ``pixel width'' later in @ref{Sampling 
theorem}.
+Therefore the Dirac comb is periodic with a period of @mymath{P}.
+We have intentionally used a different name for the period of the Dirac comb 
compared to the input signal's length of observation that we showed with 
@mymath{L} in @ref{Fourier series}.
+This difference is highlighted here to avoid confusion later when these two 
periods are needed together in @ref{Discrete Fourier transform}.
+The Fourier transform of the Dirac comb will be necessary in @ref{Sampling 
theorem}, so let's derive it.
+By its definition, it is periodic, with a period of @mymath{P}, so the Fourier 
coefficients of its Fourier Series (@ref{Fourier series}) can be calculated 
within one period:
 
 @dispmath{{\rm 
III}_P=\displaystyle\sum_{n=-\infty}^{\infty}c_ne^{i{2{\pi}n\over
 P}l}}
@@ -14542,15 +10768,9 @@ So we can write the Fourier transform of the Dirac 
comb as:
 
 
 @noindent
-In the last step, we used the fact that the complex exponential is a
-periodic function, that @mymath{n} is an integer and that as we
-defined in @ref{Fourier transform}, @mymath{\omega{\equiv}m\omega_0},
-where @mymath{m} was an integer. The integral will be zero for any
-@mymath{\omega} that is not equal to @mymath{2{\pi}n/P}, a more
-complete explanation can be seen in @ref{Fourier series}. Therefore,
-while in the spatial domain the impulses had spacing of @mymath{P}
-(meters for example), in the frequency space, the spacing between the
-different impulses are @mymath{2\pi/P} cycles per meters.
+In the last step, we used the fact that the complex exponential is a periodic 
function, that @mymath{n} is an integer and that as we defined in @ref{Fourier 
transform}, @mymath{\omega{\equiv}m\omega_0}, where @mymath{m} was an integer.
+The integral will be zero for any @mymath{\omega} that is not equal to 
@mymath{2{\pi}n/P}; a more complete explanation can be seen in @ref{Fourier 
series}.
+Therefore, while in the spatial domain the impulses had a spacing of 
@mymath{P} (meters for example), in the frequency space the spacing between 
the different impulses is @mymath{2\pi/P} cycles per meter.
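+
+This comb-to-comb property can be verified numerically with a discrete
+(Kronecker) comb; in this Python/NumPy sketch the array length and the
+spacing are arbitrary assumptions:
+
+@example
+import numpy as np
+
+N, P = 240, 12            # N is a multiple of P for a clean result
+comb = np.zeros(N)
+comb[::P] = 1.0           # impulses every P pixels (spatial domain)
+
+mag = np.abs(np.fft.fft(comb))
+print(np.nonzero(mag > 1e-9)[0])   # spikes every N/P=20 freq-pixels
+@end example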
 
 
 @node Convolution theorem, Sampling theorem, Dirac delta and comb, Frequency 
domain and Fourier operations
@@ -14564,9 +10784,8 @@ 
c(l)\equiv[f{\ast}h](l)=\int_{-\infty}^{\infty}f(\tau)h(l-\tau)d\tau
 }
 
 @noindent
-See @ref{Convolution process} for a more detailed physical (pixel
-based) interpretation of this definition. The Fourier transform of
-convolution (@mymath{C(\omega)}) can be written as:
+See @ref{Convolution process} for a more detailed physical (pixel based) 
interpretation of this definition.
+The Fourier transform of convolution (@mymath{C(\omega)}) can be written as:
 
 @dispmath{
   C(\omega)=\int_{-\infty}^{\infty}[f{\ast}h](l)e^{-i{\omega}l}dl=
@@ -14584,34 +10803,29 @@ becomes:
 }
 
 @noindent
-where @mymath{H(\omega)} is the Fourier transform of
-@mymath{h(l)}. Substituting this result for the inner integral above,
-we get:
+where @mymath{H(\omega)} is the Fourier transform of @mymath{h(l)}.
+Substituting this result for the inner integral above, we get:
 
 @dispmath{
 
C(\omega)=H(\omega)\int_{-\infty}^{\infty}f(\tau)e^{-i{\omega}\tau}d\tau=H(\omega)F(\omega)=F(\omega)H(\omega)
 }
 
 @noindent
-where @mymath{F(\omega)} is the Fourier transform of @mymath{f(l)}. So
-multiplying the Fourier transform of two functions individually, we
-get the Fourier transform of their convolution. The convolution
-theorem also proves a relation between the convolutions in the
-frequency space. Let's define:
+where @mymath{F(\omega)} is the Fourier transform of @mymath{f(l)}.
+So by multiplying the Fourier transforms of the two functions, we get the 
Fourier transform of their convolution.
+The convolution theorem also proves a relation between the convolutions in the 
frequency space.
+Let's define:
 
 @dispmath{D(\omega){\equiv}F(\omega){\ast}H(\omega)}
 
 @noindent
-Applying the inverse Fourier Transform or synthesis equation
-(@ref{Fourier transform}) to both sides and following the same steps
-above, we get:
+Applying the inverse Fourier Transform or synthesis equation (@ref{Fourier 
transform}) to both sides and following the same steps above, we get:
 
 @dispmath{d(l)=f(l)h(l)}
 
 @noindent
-Where @mymath{d(l)} is the inverse Fourier transform of
-@mymath{D(\omega)}. We can therefore re-write the two equations above
-formally as the convolution theorem:
+where @mymath{d(l)} is the inverse Fourier transform of @mymath{D(\omega)}.
+We can therefore re-write the two equations above formally as the convolution 
theorem:
 
 @dispmath{
   {\cal F}[f{\ast}h]={\cal F}[f]{\cal F}[h]
@@ -14621,24 +10835,15 @@ formally as the convolution theorem:
   {\cal F}[fh]={\cal F}[f]\ast{\cal F}[h]
 }
 
-Besides its usefulness in blurring an image by convolving it with a
-given kernel, the convolution theorem also enables us to do another
-very useful operation in data analysis: to match the blur (or PSF)
-between two images taken with different telescopes/cameras or under
-different atmospheric conditions. This process is also known as
-de-convolution. Let's take @mymath{f(l)} as the image with a narrower
-PSF (less blurry) and @mymath{c(l)} as the image with a wider PSF
-which appears more blurred. Also let's take @mymath{h(l)} to represent
-the kernel that should be convolved with the sharper image to create
-the more blurry image. Above, we proved the relation between these
-three images through the convolution theorem. But there, we assumed
-that @mymath{f(l)} and @mymath{h(l)} are known (given) and the
-convolved image is desired.
-
-In de-convolution, we have @mymath{f(l)} --the sharper image-- and
-@mymath{f*h(l)} --the more blurry image-- and we want to find the kernel
-@mymath{h(l)}. The solution is a direct result of the convolution
-theorem:
+Besides its usefulness in blurring an image by convolving it with a given 
kernel, the convolution theorem also enables us to do another very useful 
operation in data analysis: to match the blur (or PSF) between two images taken 
with different telescopes/cameras or under different atmospheric conditions.
+This process is also known as de-convolution.
+Let's take @mymath{f(l)} as the image with a narrower PSF (less blurry) and 
@mymath{c(l)} as the image with a wider PSF which appears more blurred.
+Also let's take @mymath{h(l)} to represent the kernel that should be convolved 
with the sharper image to create the more blurry image.
+Above, we proved the relation between these three images through the 
convolution theorem.
+But there, we assumed that @mymath{f(l)} and @mymath{h(l)} are known (given) 
and the convolved image is desired.
+
+In de-convolution, we have @mymath{f(l)} (the sharper image) and 
@mymath{[f{\ast}h](l)} (the more blurry image), and we want to find the kernel 
@mymath{h(l)}.
+The solution is a direct result of the convolution theorem:
 
 @dispmath{
   {\cal F}[h]={{\cal F}[f{\ast}h]\over {\cal F}[f]}
@@ -14653,13 +10858,10 @@ While this works really nice, it has two problems:
 @itemize
 
 @item
-If @mymath{{\cal F}[f]} has any zero values, then the inverse Fourier
-transform will not be a number!
+If @mymath{{\cal F}[f]} has any zero values, then the inverse Fourier 
transform will not be a number!
 
 @item
-If there is significant noise in the image, then the high frequencies
-of the noise are going to significantly reduce the quality of the
-final result.
+If there is significant noise in the image, then the high frequencies of the 
noise are going to significantly reduce the quality of the final result.
 
 @end itemize
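+
+The following noiseless toy sketch (again Python/NumPy, with a made-up
+signal and kernel) checks both the convolution theorem and the naive
+de-convolution above; with real, noisy data the division is fragile for
+exactly the two reasons just listed:
+
+@example
+import numpy as np
+
+rng = np.random.default_rng(1)
+f = rng.random(64)                   # "sharp" input signal
+h = np.zeros(64)
+h[:5] = [1, 4, 6, 4, 1]              # kernel, padded to the same size
+h /= h.sum()                         # kernels sum to unity
+
+conv = np.fft.ifft(np.fft.fft(f)*np.fft.fft(h)).real   # F[f]F[h]
+H = np.fft.fft(conv)/np.fft.fft(f)                      # F[f*h]/F[f]
+print(np.allclose(np.fft.ifft(H).real, h))              # True
+@end example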
 
@@ -14669,14 +10871,9 @@ 
algorithm@footnote{@url{https://en.wikipedia.org/wiki/Wiener_deconvolution}}.
 @node Sampling theorem, Discrete Fourier transform, Convolution theorem, 
Frequency domain and Fourier operations
 @subsubsection Sampling theorem
 
-Our mathematical functions are continuous, however, our data
-collecting and measuring tools are discrete. Here we want to give a
-mathematical formulation for digitizing the continuous mathematical
-functions so that later, we can retrieve the continuous function from
-the digitized recorded input. Assuming that we have a continuous
-function @mymath{f(l)}, then we can define @mymath{f_s(l)} as the
-`sampled' @mymath{f(l)} through the Dirac comb (see @ref{Dirac delta
-and comb}):
+Our mathematical functions are continuous; however, our data collecting and 
measuring tools are discrete.
+Here we want to give a mathematical formulation for digitizing the continuous 
mathematical functions so that later, we can retrieve the continuous function 
from the digitized recorded input.
+Assuming that we have a continuous function @mymath{f(l)}, then we can define 
@mymath{f_s(l)} as the `sampled' @mymath{f(l)} through the Dirac comb (see 
@ref{Dirac delta and comb}):
 
 @dispmath{
 f_s(l)=f(l){\rm III}_P=\displaystyle\sum_{n=-\infty}^{\infty}f(l)\delta(l-nP)
@@ -14688,15 +10885,11 @@ image), where @mymath{k} is an integer, can thus be 
represented as:
 
 
 @dispmath{f_k=\int_{-\infty}^{\infty}f(l)\delta(l-kP)dl=f(kP)}
 
-Note that in practice, our discrete data points are not found in this
-fashion. Each detector pixel (in an image for example) has an area and
-averages the signal it receives over that area, not a mathematical point as
-the Dirac @mymath{\delta} function defines. However, as long as the
-variation in the signal over one detector pixel is not significant, this
-can be a good approximation. Having put this issue to the side, we can now
-try to find the relation between the Fourier transforms of the un-sampled
-@mymath{f(l)} and the sampled @mymath{f_s(l)}. For a more clear notation,
-let's define:
+Note that in practice, our discrete data points are not found in this fashion.
+Each detector pixel (in an image for example) has an area and averages the 
signal it receives over that area, not a mathematical point as the Dirac 
@mymath{\delta} function defines.
+However, as long as the variation in the signal over one detector pixel is not 
significant, this can be a good approximation.
+Having put this issue to the side, we can now try to find the relation between 
the Fourier transforms of the un-sampled @mymath{f(l)} and the sampled 
@mymath{f_s(l)}.
+For clearer notation, let's define:
 
 @dispmath{F_s(\omega)\equiv{\cal F}[f_s]}
 
@@ -14720,115 +10913,69 @@ F_s(\omega) &= 
\int_{-\infty}^{\infty}F(\omega)D(\omega-\mu)d\mu \cr
    \omega-{2{\pi}n\over P}\right).\cr }
 }
 
-@mymath{F(\omega)} was only a simple function, see
-@ref{samplingfreq}(left). However, from the sampled Fourier transform
-function we see that @mymath{F_s(\omega)} is the superposition of
-infinite copies of @mymath{F(\omega)} that have been shifted, see
-@ref{samplingfreq}(right). From the equation, it is clear that the
-shift in each copy is @mymath{2\pi/P}.
+@mymath{F(\omega)} was only a simple function, see @ref{samplingfreq}(left).
+However, from the sampled Fourier transform function we see that 
@mymath{F_s(\omega)} is the superposition of infinite copies of 
@mymath{F(\omega)} that have been shifted, see @ref{samplingfreq}(right).
+From the equation, it is clear that the shift in each copy is @mymath{2\pi/P}.
 
 @float Figure,samplingfreq
-@image{gnuastro-figures/samplingfreq, 15.2cm, , } @caption{Sampling
-    causes infinite repetition in the frequency domain. FT is an
-    abbreviation for `Fourier transform'. @mymath{\omega_m} represents
-    the maximum frequency present in the input. @mymath{F(\omega)} is
-    only symmetric on both sides of 0 when the input is real (not
-    complex). In general @mymath{F(\omega)} is complex and thus cannot
-    be simply plotted like this. Here we have assumed a real Gaussian
-    @mymath{f(t)} which has produced a Gaussian @mymath{F(\omega)}.}
+@image{gnuastro-figures/samplingfreq, 15.2cm, , }
+@caption{Sampling causes infinite repetition in the frequency domain.
+FT is an abbreviation for `Fourier transform'.
+@mymath{\omega_m} represents the maximum frequency present in the input.
+@mymath{F(\omega)} is only symmetric on both sides of 0 when the input is real 
(not complex).
+In general @mymath{F(\omega)} is complex and thus cannot be simply plotted 
like this.
+Here we have assumed a real Gaussian @mymath{f(t)} which has produced a 
Gaussian @mymath{F(\omega)}.}
 @end float
 
-The input @mymath{f(l)} can have any distribution of frequencies in
-it. In the example of @ref{samplingfreq}(left), the input consisted of
-a range of frequencies equal to
-@mymath{\Delta\omega=2\omega_m}. Fortunately as
-@ref{samplingfreq}(right) shows, the assumed pixel size (@mymath{P})
-we used to sample this hypothetical function was such that
-@mymath{2\pi/P>\Delta\omega}. The consequence is that each copy of
-@mymath{F(\omega)} has become completely separate from the surrounding
-copies. Such a digitized (sampled) data set is thus called
-@emph{over-sampled}. When @mymath{2\pi/P=\Delta\omega}, @mymath{P} is
-just small enough to finely separate even the largest frequencies in
-the input signal and thus it is known as
-@emph{critically-sampled}. Finally if @mymath{2\pi/P<\Delta\omega} we
-are dealing with an @emph{under-sampled} data set. In an under-sampled
-data set, the separate copies of @mymath{F(\omega)} are going to
-overlap and this will deprive us of recovering high constituent
-frequencies of @mymath{f(l)}. The effects of under-sampling in an
-image with high rates of change (for example a brick wall imaged from
-a distance) can clearly be visually seen and is known as
-@emph{aliasing}.
-
-When the input @mymath{f(l)} is composed of a finite range of
-frequencies, @mymath{f(l)} is known as a @emph{band-limited}
-function. The example in @ref{samplingfreq}(left) was a nice
-demonstration of such a case: for all @mymath{\omega<-\omega_m} or
-@mymath{\omega>\omega_m}, we have @mymath{F(\omega)=0}. Therefore,
-when the input function is band-limited and our detector's pixels are
-placed such that we have critically (or over-) sampled it, then we can
-exactly reproduce the continuous @mymath{f(l)} from the discrete or
-digitized samples. To do that, we just have to isolate one copy of
-@mymath{F(\omega)} from the infinite copies and take its inverse
-Fourier transform.
-
-This ability to exactly reproduce the continuous input from the
-sampled or digitized data leads us to the @emph{sampling theorem}
-which connects the inherent property of the continuous signal (its
-maximum frequency) to that of the detector (the spacing between its
-pixels). The sampling theorem states that the full (continuous) signal
-can be recovered when the pixel size (@mymath{P}) and the maximum
-constituent frequency in the signal (@mymath{\omega_m}) have the
-following relation@footnote{This equation is also shown in some places
-without the @mymath{2\pi}. Whether @mymath{2\pi} is included or not
-depends on how you define the frequency}:
+The input @mymath{f(l)} can have any distribution of frequencies in it.
+In the example of @ref{samplingfreq}(left), the input consisted of a range of 
frequencies equal to @mymath{\Delta\omega=2\omega_m}.
+Fortunately as @ref{samplingfreq}(right) shows, the assumed pixel size 
(@mymath{P}) we used to sample this hypothetical function was such that 
@mymath{2\pi/P>\Delta\omega}.
+The consequence is that each copy of @mymath{F(\omega)} has become completely 
separate from the surrounding copies.
+Such a digitized (sampled) data set is thus called @emph{over-sampled}.
+When @mymath{2\pi/P=\Delta\omega}, @mymath{P} is just small enough to finely 
separate even the largest frequencies in the input signal and thus it is known 
as @emph{critically-sampled}.
+Finally if @mymath{2\pi/P<\Delta\omega} we are dealing with an 
@emph{under-sampled} data set.
+In an under-sampled data set, the separate copies of @mymath{F(\omega)} are 
going to overlap and this will deprive us of recovering high constituent 
frequencies of @mymath{f(l)}.
+The effects of under-sampling in an image with high rates of change (for 
example, a brick wall imaged from a distance) can clearly be seen visually and 
are known as @emph{aliasing}.
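+
+Aliasing is simple to demonstrate numerically: in this Python/NumPy
+sketch (pixel size and frequencies are made-up assumptions), a cosine
+that varies too fast for the chosen pixel size produces exactly the same
+samples as a much slower cosine:
+
+@example
+import numpy as np
+
+P = 1.0                        # pixel (sample) spacing
+l = np.arange(32)*P
+fast = np.cos(2*np.pi*0.9*l)   # 0.9 cycles/unit: too fast for P
+slow = np.cos(2*np.pi*0.1*l)   # its alias, since 1/P - 0.9 = 0.1
+print(np.allclose(fast, slow)) # True: samples cannot tell them apart
+@end example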
+
+When the input @mymath{f(l)} is composed of a finite range of frequencies, 
@mymath{f(l)} is known as a @emph{band-limited} function.
+The example in @ref{samplingfreq}(left) was a nice demonstration of such a 
case: for all @mymath{\omega<-\omega_m} or @mymath{\omega>\omega_m}, we have 
@mymath{F(\omega)=0}.
+Therefore, when the input function is band-limited and our detector's pixels 
are placed such that we have critically (or over-) sampled it, then we can 
exactly reproduce the continuous @mymath{f(l)} from the discrete or digitized 
samples.
+To do that, we just have to isolate one copy of @mymath{F(\omega)} from the 
infinite copies and take its inverse Fourier transform.
+
+This ability to exactly reproduce the continuous input from the sampled or 
digitized data leads us to the @emph{sampling theorem} which connects the 
inherent property of the continuous signal (its maximum frequency) to that of 
the detector (the spacing between its pixels).
+The sampling theorem states that the full (continuous) signal can be recovered 
when the pixel size (@mymath{P}) and the maximum constituent frequency in the 
signal (@mymath{\omega_m}) have the following relation@footnote{This equation 
is also shown in some places without the @mymath{2\pi}.
+Whether @mymath{2\pi} is included or not depends on how you define the 
frequency.}:
 
 @dispmath{{2\pi\over P}>2\omega_m}
 
 @noindent
-This relation was first formulated by Harry Nyquist (1889 -- 1976
-A.D.) in 1928 and formally proved in 1949 by Claude E. Shannon (1916
--- 2001 A.D.) in what is now known as the Nyquist-Shannon sampling
-theorem. In signal processing, the signal is produced (synthesized) by
-a transmitter and is received and de-coded (analyzed) by a
-receiver. Therefore producing a band-limited signal is necessary.
-
-In astronomy, we do not produce the shapes of our targets, we are only
-observers. Galaxies can have any shape and size, therefore ideally,
-our signal is not band-limited. However, since we are always confined
-to observing through an aperture, the aperture will cause a point
-source (for which @mymath{\omega_m=\infty}) to be spread over several
-pixels. This spread is quantitatively known as the point spread
-function or PSF. This spread does blur the image which is undesirable;
-however, for this analysis it produces the positive outcome that there
-will be a finite @mymath{\omega_m}. Though we should caution that any
-detector will have noise which will add lots of very high frequency
-(ideally infinite) changes between the pixels. However, the
-coefficients of those noise frequencies are usually exceedingly small.
+This relation was first formulated by Harry Nyquist (1889 -- 1976 A.D.) in 
1928 and formally proved in 1949 by Claude E. Shannon (1916 -- 2001 A.D.) in 
what is now known as the Nyquist-Shannon sampling theorem.
+In signal processing, the signal is produced (synthesized) by a transmitter 
and is received and de-coded (analyzed) by a receiver.
+Therefore producing a band-limited signal is necessary.
+
+In astronomy, we do not produce the shapes of our targets; we are only 
observers.
+Galaxies can have any shape and size; therefore, ideally, our signal is not 
band-limited.
+However, since we are always confined to observing through an aperture, the 
aperture will cause a point source (for which @mymath{\omega_m=\infty}) to be 
spread over several pixels.
+This spread is quantitatively known as the point spread function or PSF.
+This spread does blur the image which is undesirable; however, for this 
analysis it produces the positive outcome that there will be a finite 
@mymath{\omega_m}.
+Though we should caution that any detector will have noise which will add lots 
of very high frequency (ideally infinite) changes between the pixels.
+However, the coefficients of those noise frequencies are usually exceedingly 
small.
 
 @node Discrete Fourier transform, Fourier operations in two dimensions, 
Sampling theorem, Frequency domain and Fourier operations
 @subsubsection Discrete Fourier transform
 
-As we have stated several times so far, the input image is a
-digitized, pixelated or discrete array of values (@mymath{f_s(l)}, see
-@ref{Sampling theorem}). The input is not a continuous function. Also,
-all our numerical calculations can only be done on a sampled, or
-discrete Fourier transform. Note that @mymath{F_s(\omega)} is not
-discrete, it is continuous. One way would be to find the analytic
-@mymath{F_s(\omega)}, then sample it at any desired
-``freq-pixel''@footnote{We are using the made-up word ``freq-pixel''
-so they are not confused with spatial domain ``pixels''.}
-spacing. However, this process would involve two steps of operations
-and computers in particular are not too good at analytic operations
-for the first step. So here, we will derive a method to directly find
-the `freq-pixel'ated @mymath{F_s(\omega)} from the pixelated
-@mymath{f_s(l)}. Let's start with the definition of the Fourier
-transform (see @ref{Fourier transform}):
+As we have stated several times so far, the input image is a digitized, 
pixelated or discrete array of values (@mymath{f_s(l)}, see @ref{Sampling 
theorem}).
+The input is not a continuous function.
+Also, all our numerical calculations can only be done on a sampled, or 
discrete Fourier transform.
+Note that @mymath{F_s(\omega)} is not discrete, it is continuous.
+One way would be to find the analytic @mymath{F_s(\omega)}, then sample it at 
any desired ``freq-pixel''@footnote{We are using the made-up word 
``freq-pixel'' so they are not confused with spatial domain ``pixels''.} 
spacing.
+However, this process would involve two steps of operations, and computers in 
particular are not very good at the analytic operations of the first step.
+So here, we will derive a method to directly find the `freq-pixel'ated 
@mymath{F_s(\omega)} from the pixelated @mymath{f_s(l)}.
+Let's start with the definition of the Fourier transform (see @ref{Fourier 
transform}):
 
 @dispmath{F_s(\omega)=\int_{-\infty}^{\infty}f_s(l)e^{-i{\omega}l}dl }
 
 @noindent
-From the definition of @mymath{f_s(\omega)} (using @mymath{x} instead
-of @mymath{n}) we get:
+From the definition of @mymath{f_s(l)} (using @mymath{x} instead of 
@mymath{n}) we get:
 
 @dispmath{
 \eqalign{
@@ -14840,14 +10987,11 @@ of @mymath{n}) we get:
 }
 
 @noindent
-Where @mymath{f_x} is the value of @mymath{f(l)} on the point
-@mymath{x} or the value of the @mymath{x}th pixel. As shown in
-@ref{Sampling theorem} this function is infinitely periodic with a
-period of @mymath{2\pi/P}. So all we need is the values within one
-period: @mymath{0<\omega<2\pi/P}, see @ref{samplingfreq}. We want
-@mymath{X} samples within this interval, so the frequency difference
-between each frequency sample or freq-pixel is @mymath{1/XP}. Hence we
-will evaluate the equation above on the points at:
+Where @mymath{f_x} is the value of @mymath{f(l)} on the point @mymath{x} or 
the value of the @mymath{x}th pixel.
+As shown in @ref{Sampling theorem} this function is infinitely periodic with a 
period of @mymath{2\pi/P}.
+So all we need is the values within one period: @mymath{0<\omega<2\pi/P}, see 
@ref{samplingfreq}.
+We want @mymath{X} samples within this interval, so the frequency difference 
between each frequency sample or freq-pixel is @mymath{2\pi/XP}.
+Hence we will evaluate the equation above on the points at:
 
 @dispmath{\omega={2{\pi}u\over XP} \quad\quad u = 0, 1, 2, ..., X-1}
 
@@ -14858,39 +11002,23 @@ domain is:
 @dispmath{F_u=\displaystyle\sum_{x=0}^{X-1} f_xe^{-i{2{\pi}ux\over X}} }
 
 @noindent
-Therefore, we see that for each freq-pixel in the frequency domain, we
-are going to need all the pixels in the spatial domain@footnote{So
-even if one pixel is a blank pixel (see @ref{Blank pixels}), all the
-pixels in the frequency domain will also be blank.}. If the input
-(spatial) pixel row is also @mymath{X} pixels wide, then we can
-exactly recover the @mymath{x}th pixel with the following summation:
+Therefore, we see that for each freq-pixel in the frequency domain, we are 
going to need all the pixels in the spatial domain@footnote{So even if one 
pixel is a blank pixel (see @ref{Blank pixels}), all the pixels in the 
frequency domain will also be blank.}.
+If the input (spatial) pixel row is also @mymath{X} pixels wide, then we can 
exactly recover the @mymath{x}th pixel with the following summation:
 
 @dispmath{f_x={1\over X}\displaystyle\sum_{u=0}^{X-1} F_ue^{i{2{\pi}ux\over X}} }
 
-When the input pixel row (we are still only working on 1D data) has
-@mymath{X} pixels, then it is @mymath{L=XP} spatial units
-wide. @mymath{L}, or the length of the input data was defined in
-@ref{Fourier series} and @mymath{P} or the space between the pixels in
-the input was defined in @ref{Dirac delta and comb}. As we saw in
-@ref{Sampling theorem}, the input (spatial) pixel spacing (@mymath{P})
-specifies the range of frequencies that can be studied and in
-@ref{Fourier series} we saw that the length of the (spatial) input,
-(@mymath{L}) determines the resolution (or size of the freq-pixels) in
-our discrete Fourier transformed image. Both result from the fact that
-the frequency domain is the inverse of the spatial domain.
+When the input pixel row (we are still only working on 1D data) has @mymath{X} 
pixels, then it is @mymath{L=XP} spatial units wide.
+@mymath{L} (the length of the input data) was defined in @ref{Fourier series} 
and @mymath{P} (the space between the pixels in the input) was defined in 
@ref{Dirac delta and comb}.
+As we saw in @ref{Sampling theorem}, the input (spatial) pixel spacing 
(@mymath{P}) specifies the range of frequencies that can be studied and in 
@ref{Fourier series} we saw that the length of the (spatial) input, 
(@mymath{L}) determines the resolution (or size of the freq-pixels) in our 
discrete Fourier transformed image.
+Both result from the fact that the frequency domain is the inverse of the 
spatial domain.
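+
+The analysis and synthesis sums above can be implemented directly and,
+as a sanity check, compared with a fast Fourier transform routine that
+follows the same convention; this Python/NumPy sketch uses an arbitrary
+made-up input row:
+
+@example
+import numpy as np
+
+f = np.array([1.0, 2.0, 0.5, -1.0, 3.0, 0.0])
+X = len(f)
+x = np.arange(X)
+
+F = np.array([np.sum(f*np.exp(-1j*2*np.pi*u*x/X)) for u in range(X)])
+print(np.allclose(F, np.fft.fft(f)))     # True: same convention
+
+u = np.arange(X)                         # synthesis: recover pixel x
+fb = np.array([np.sum(F*np.exp(1j*2*np.pi*u*k/X))
+               for k in range(X)])/X
+print(np.allclose(fb.real, f))           # True: input is recovered
+@end example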
 
 @node Fourier operations in two dimensions, Edges in the frequency domain, 
Discrete Fourier transform, Frequency domain and Fourier operations
 @subsubsection Fourier operations in two dimensions
 
-Once all the relations in the previous sections have been clearly
-understood in one dimension, it is very easy to generalize them to two
-or even more dimensions since each dimension is by definition
-independent. Previously we defined @mymath{l} as the continuous
-variable in 1D and the inverse of the period in its direction to be
-@mymath{\omega}. Let's show the second spatial direction with
-@mymath{m} the the inverse of the period in the second dimension with
-@mymath{\nu}. The Fourier transform in 2D (see @ref{Fourier
-transform}) can be written as:
+Once all the relations in the previous sections have been clearly understood 
in one dimension, it is very easy to generalize them to two or even more 
dimensions since each dimension is by definition independent.
+Previously we defined @mymath{l} as the continuous variable in 1D and the 
inverse of the period in its direction to be @mymath{\omega}.
+Let's show the second spatial direction with @mymath{m} and the inverse of the 
period in the second dimension with @mymath{\nu}.
+The Fourier transform in 2D (see @ref{Fourier transform}) can be written as:
 
 @dispmath{F(\omega, \nu)=\int_{-\infty}^{\infty}\int_{-\infty}^{\infty}
f(l, m)e^{-i({\omega}l+{\nu}m)}dl\,dm}
@@ -14898,36 +11026,25 @@ f(l, m)e^{-i({\omega}l+{\nu}m)}dl}
 @dispmath{f(l, m)=\int_{-\infty}^{\infty}\int_{-\infty}^{\infty}
F(\omega, \nu)e^{i({\omega}l+{\nu}m)}d\omega\,d\nu}
 
-The 2D Dirac @mymath{\delta(l,m)} is non-zero only when
-@mymath{l=m=0}.  The 2D Dirac comb (or Dirac brush! See @ref{Dirac
-delta and comb}) can be written in units of the 2D Dirac
-@mymath{\delta}. For most image detectors, the sides of a pixel are
-equal in both dimensions. So @mymath{P} remains unchanged, if a
-specific device is used which has non-square pixels, then for each
-dimension a different value should be used.
+The 2D Dirac @mymath{\delta(l,m)} is non-zero only when @mymath{l=m=0}.
+The 2D Dirac comb (or Dirac brush! See @ref{Dirac delta and comb}) can be 
written in units of the 2D Dirac @mymath{\delta}.
+For most image detectors, the sides of a pixel are equal in both dimensions.
+So @mymath{P} remains unchanged; if a specific device with non-square pixels 
is used, then a different value should be used for each dimension.
 
 @dispmath{{\rm III}_P(l, m)\equiv\displaystyle\sum_{j=-\infty}^{\infty}
 \displaystyle\sum_{k=-\infty}^{\infty}
 \delta(l-jP, m-kP) }
 
-The Two dimensional Sampling theorem (see @ref{Sampling theorem}) is thus
-very easily derived as before since the frequencies in each dimension are
-independent. Let's take @mymath{\nu_m} as the maximum frequency along the
-second dimension. Therefore the two dimensional sampling theorem says that
-a 2D band-limited function can be recovered when the following conditions
-hold@footnote{If the pixels are not a square, then each dimension has to
-use the respective pixel size, but since most detectors have square pixels,
-we assume so here too}:
+The two-dimensional sampling theorem (see @ref{Sampling theorem}) is thus very 
easily derived as before, since the frequencies in each dimension are 
independent.
+Let's take @mymath{\nu_m} as the maximum frequency along the second dimension.
+Therefore the two-dimensional sampling theorem says that a 2D band-limited 
function can be recovered when the following conditions hold@footnote{If the 
pixels are not square, then each dimension has to use the respective pixel 
size, but since most detectors have square pixels, we assume so here too.}:
 
 @dispmath{ {2\pi\over P} > 2\omega_m \quad\quad\quad {\rm and}
 \quad\quad\quad {2\pi\over P} > 2\nu_m}
 
-Finally, let's represent the pixel counter on the second dimension in
-the spatial and frequency domains with @mymath{y} and @mymath{v}
-respectively. Also let's assume that the input image has @mymath{Y}
-pixels on the second dimension. Then the two dimensional discrete
-Fourier transform and its inverse (see @ref{Discrete Fourier
-transform}) can be written as:
+Finally, let's represent the pixel counter on the second dimension in the 
spatial and frequency domains with @mymath{y} and @mymath{v} respectively.
+Also let's assume that the input image has @mymath{Y} pixels on the second 
dimension.
+Then the two-dimensional discrete Fourier transform and its inverse (see 
@ref{Discrete Fourier transform}) can be written as:
 
 @dispmath{F_{u,v}=\displaystyle\sum_{x=0}^{X-1}\displaystyle\sum_{y=0}^{Y-1}
f_{x,y}e^{-i2\pi({ux\over X}+{vy\over Y})} }
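+
+Since each dimension is independent, the 2D transform is just the 1D
+transform applied along each axis in turn; this Python/NumPy sketch (on
+a made-up tiny image) makes the separability explicit:
+
+@example
+import numpy as np
+
+img = np.arange(12.0).reshape(3, 4)   # tiny Y=3 by X=4 "image"
+
+rows = np.fft.fft(img, axis=1)        # 1D DFT along each row
+both = np.fft.fft(rows, axis=0)       # ... then along each column
+print(np.allclose(both, np.fft.fft2(img)))   # True
+@end example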
@@ -14939,83 +11056,50 @@ F_{u,v}e^{i({ux\over X}+{vy\over Y})} }
 @node Edges in the frequency domain,  , Fourier operations in two dimensions, 
Frequency domain and Fourier operations
 @subsubsection Edges in the frequency domain
 
-With a good grasp of the frequency domain, we can revisit the problem
-of convolution on the image edges, see @ref{Edges in the spatial
-domain}.  When we apply the convolution theorem (see @ref{Convolution
-theorem}) to convolve an image, we first take the discrete Fourier
-transforms (DFT, @ref{Discrete Fourier transform}) of both the input
-image and the kernel, then we multiply them with each other and then
-take the inverse DFT to construct the convolved image. Of course, in
-order to multiply them with each other in the frequency domain, the
-two images have to be the same size, so let's assume that we pad the
-kernel (it is usually smaller than the input image) with zero valued
-pixels in both dimensions so it becomes the same size as the input
-image before the DFT.
-
-Having multiplied the two DFTs, we now apply the inverse DFT which is
-where the problem is usually created. If the DFT of the kernel only
-had values of 1 (unrealistic condition!) then there would be no
-problem and the inverse DFT of the multiplication would be identical
-with the input. However in real situations, the kernel's DFT has a
-maximum of 1 (because the sum of the kernel has to be one, see
-@ref{Convolution process}) and decreases something like the
-hypothetical profile of @ref{samplingfreq}. So when multiplied with
-the input image's DFT, the coefficients or magnitudes (see
-@ref{Circles and the complex plane}) of the smallest frequency (or the
-sum of the input image pixels) remains unchanged, while the magnitudes
-of the higher frequencies are significantly reduced.
-
-As we saw in @ref{Sampling theorem}, the Fourier transform of a
-discrete input will be infinitely repeated. In the final inverse DFT
-step, the input is in the frequency domain (the multiplied DFT of the
-input image and the kernel DFT). So the result (our output convolved
-image) will be infinitely repeated in the spatial domain. In order to
-accurately reconstruct the input image, we need all the frequencies
-with the correct magnitudes. However, when the magnitudes of higher
-frequencies are decreased, longer periods (shorter frequencies) will
-dominate in the reconstructed pixel values. Therefore, when
-constructing a pixel on the edge of the image, the newly empowered
-longer periods will look beyond the input image edges and will find
-the repeated input image there. So if you convolve an image in this
-fashion using the convolution theorem, when a bright object exists on
-one edge of the image, its blurred wings will be present on the other
-side of the convolved image. This is often termed as circular
-convolution or cyclic convolution.
-
-So, as long as we are dealing with convolution in the frequency domain,
-there is nothing we can do about the image edges. The least we can do
-is to eliminate the ghosts of the other side of the image. So, we add
-zero valued pixels to both the input image and the kernel in both
-dimensions so the image that will be convolved has a size equal to
-the sum of both images in each dimension. Of course, the effect of this
-zero-padding is that the sides of the output convolved image will
-become dark. To put it another way, the edges are going to drain the
-flux from nearby objects. But at least it is consistent across all the
-edges of the image and is predictable. In Convolve, you can see the
-padded images when inspecting the frequency domain convolution steps
-with the @option{--viewfreqsteps} option.
+With a good grasp of the frequency domain, we can revisit the problem of 
convolution on the image edges, see @ref{Edges in the spatial domain}.
+When we apply the convolution theorem (see @ref{Convolution theorem}) to 
convolve an image, we first take the discrete Fourier transforms (DFT, 
@ref{Discrete Fourier transform}) of both the input image and the kernel, then 
we multiply them with each other and then take the inverse DFT to construct the 
convolved image.
+Of course, in order to multiply them with each other in the frequency domain, 
the two images have to be the same size, so let's assume that we pad the kernel 
(it is usually smaller than the input image) with zero valued pixels in both 
dimensions so it becomes the same size as the input image before the DFT.
+
+Having multiplied the two DFTs, we now apply the inverse DFT which is where 
the problem is usually created.
+If the DFT of the kernel only had values of 1 (unrealistic condition!) then 
there would be no problem and the inverse DFT of the multiplication would be 
identical with the input.
+However in real situations, the kernel's DFT has a maximum of 1 (because the 
sum of the kernel has to be one, see @ref{Convolution process}) and decreases 
something like the hypothetical profile of @ref{samplingfreq}.
+So when multiplied with the input image's DFT, the coefficients or magnitudes 
(see @ref{Circles and the complex plane}) of the smallest frequency (or the sum 
of the input image pixels) remain unchanged, while the magnitudes of the 
higher frequencies are significantly reduced.
+
+As we saw in @ref{Sampling theorem}, the Fourier transform of a discrete input 
will be infinitely repeated.
+In the final inverse DFT step, the input is in the frequency domain (the 
multiplied DFT of the input image and the kernel DFT).
+So the result (our output convolved image) will be infinitely repeated in the 
spatial domain.
+In order to accurately reconstruct the input image, we need all the 
frequencies with the correct magnitudes.
+However, when the magnitudes of higher frequencies are decreased, longer 
periods (lower frequencies) will dominate in the reconstructed pixel values.
+Therefore, when constructing a pixel on the edge of the image, the newly 
empowered longer periods will look beyond the input image edges and will find 
the repeated input image there.
+So if you convolve an image in this fashion using the convolution theorem, 
when a bright object exists on one edge of the image, its blurred wings will be 
present on the other side of the convolved image.
+This is often termed circular convolution or cyclic convolution.
+
+So, as long as we are dealing with convolution in the frequency domain, there 
is nothing we can do about the image edges.
+The least we can do is to eliminate the ghosts of the other side of the image.
+So, we add zero valued pixels to both the input image and the kernel in both 
dimensions so the image that will be convolved has a size equal to the sum of 
both images in each dimension.
+Of course, the effect of this zero-padding is that the sides of the output 
convolved image will become dark.
+To put it another way, the edges are going to drain the flux from nearby 
objects.
+But at least it is consistent across all the edges of the image and is 
predictable.
+In Convolve, you can see the padded images when inspecting the frequency 
domain convolution steps with the @option{--viewfreqsteps} option.
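+
+The wrap-around and the zero-padding fix are easy to see in a toy
+Python/NumPy sketch (the image size, source position and kernel below
+are all made-up assumptions):
+
+@example
+import numpy as np
+
+def kernel(n):                       # centered 3-tap blurring kernel
+    h = np.zeros(n)
+    h[[n-1, 0, 1]] = [0.25, 0.5, 0.25]
+    return h
+
+f = np.zeros(16)
+f[15] = 100.0                        # bright source on the right edge
+
+cyc = np.fft.ifft(np.fft.fft(f)*np.fft.fft(kernel(16))).real
+print(cyc[0])                        # 25.0: ghost wing on the far edge
+
+fpad = np.concatenate([f, np.zeros(16)])   # zero-pad input and kernel
+pad = np.fft.ifft(np.fft.fft(fpad)*np.fft.fft(kernel(32))).real
+print(pad[0])                        # 0.0: the wing went into the pad
+@end example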
 
 
 @node Spatial vs. Frequency domain, Convolution kernel, Frequency domain and 
Fourier operations, Convolve
 @subsection Spatial vs. Frequency domain
 
-With the discussions above it might not be clear when to choose the
-spatial domain and when to choose the frequency domain. Here we will
-try to list the benefits of each.
+With the discussions above it might not be clear when to choose the spatial 
domain and when to choose the frequency domain.
+Here we will try to list the benefits of each.
 
 @noindent
 The spatial domain,
 @itemize
 @item
-Can correct for the edge effects of convolution, see @ref{Edges in the
-spatial domain}.
+Can correct for the edge effects of convolution, see @ref{Edges in the spatial 
domain}.
 
 @item
 Can operate on blank pixels.
 
 @item
-Can be faster than frequency domain when the kernel is small (in terms
-of the number of pixels on the sides).
+Can be faster than frequency domain when the kernel is small (in terms of the 
number of pixels on the sides).
 @end itemize
 
 @noindent
@@ -15026,61 +11110,42 @@ Will be much faster when the image and kernel are 
both large.
 @end itemize
 
 @noindent
-As a general rule of thumb, when working on an image of modeled
-profiles use the frequency domain and when working on an image of real
-(observed) objects use the spatial domain (corrected for the
-edges). The reason is that if you apply a frequency domain convolution
-to a real image, you are going to loose information on the edges and
-generally you don't want large kernels. But when you have made the
-profiles in the image yourself, you can just make a larger input
-image and crop the central parts to completely remove the edge effect,
-see @ref{If convolving afterwards}. Also due to oversampling, both the
-kernels and the images can become very large and the speed boost of
-frequency domain convolution will significantly improve the processing
-time, see @ref{Oversampling}.
+As a general rule of thumb, when working on an image of modeled profiles use 
the frequency domain and when working on an image of real (observed) objects 
use the spatial domain (corrected for the edges).
+The reason is that if you apply a frequency domain convolution to a real 
image, you are going to lose information on the edges and generally you don't 
want large kernels.
+But when you have made the profiles in the image yourself, you can just make a 
larger input image and crop the central parts to completely remove the edge 
effect, see @ref{If convolving afterwards}.
+Also due to oversampling, both the kernels and the images can become very 
large and the speed boost of frequency domain convolution will significantly 
improve the processing time, see @ref{Oversampling}.
 
 @node Convolution kernel, Invoking astconvolve, Spatial vs. Frequency domain, 
Convolve
 @subsection Convolution kernel
 
-All the programs that need convolution will need to be given a
-convolution kernel file and extension. In most cases (other than
-Convolve, see @ref{Convolve}) the kernel file name is
-optional. However, the extension is necessary and must be specified
-either on the command-line or at least one of the configuration files
-(see @ref{Configuration files}). Within Gnuastro, there are two ways
-to create a kernel image:
+All the programs that need convolution must be given a convolution kernel file 
and extension.
+In most cases (other than Convolve, see @ref{Convolve}) the kernel file name 
is optional.
+However, the extension is necessary and must be specified either on the 
command-line or in at least one of the configuration files (see 
@ref{Configuration files}).
+Within Gnuastro, there are two ways to create a kernel image:
 
 @itemize
 
 @item
-MakeProfiles: You can use MakeProfiles to create a parametric (based
-on a radial function) kernel, see @ref{MakeProfiles}. By default
-MakeProfiles will make the Gaussian and Moffat profiles in a separate
-file so you can feed it into any of the programs.
+MakeProfiles: You can use MakeProfiles to create a parametric (based on a 
radial function) kernel, see @ref{MakeProfiles}.
+By default MakeProfiles will make the Gaussian and Moffat profiles in a 
separate file so you can feed it into any of the programs.
 
 @item
-ConvertType: You can write your own desired kernel into a text file
-table and convert it to a FITS file with ConvertType, see
-@ref{ConvertType}. Just be careful that the kernel has to have an odd
-number of pixels along its two axes, see @ref{Convolution
-process}. All the programs that do convolution will normalize the
-kernel internally, so if you choose this option, you don't have to
-worry about normalizing the kernel. Only within Convolve, there is an
-option to disable normalization, see @ref{Invoking astconvolve}.
+ConvertType: You can write your own desired kernel into a text file table and 
convert it to a FITS file with ConvertType, see @ref{ConvertType}.
+Just be careful that the kernel has to have an odd number of pixels along its 
two axes, see @ref{Convolution process}.
+All the programs that do convolution will normalize the kernel internally, so 
if you choose this option, you don't have to worry about normalizing the kernel.
+Only Convolve has an option to disable this normalization, see @ref{Invoking astconvolve}.
 
 @end itemize
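+
+As a rough sketch of the first (MakeProfiles) route, the command below should create a Gaussian kernel with a FWHM of 2 pixels, truncated at 5 times the FWHM (the option syntax is fully described in @ref{MakeProfiles}; the automatic output file name is an assumption here):
+
+@example
+$ astmkprof --kernel=gaussian,2,5 --oversample=1
+@end example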
 
 @noindent
-The two options to specify a kernel file name and its extension are
-shown below. These are common between all the programs that will do
-convolution.
+The two options to specify a kernel file name and its extension are shown 
below.
+These are common between all the programs that will do convolution.
 @table @option
 @item -k STR
 @itemx --kernel=STR
-The convolution kernel file name. The @code{BITPIX} (data type) value of
-this file can be any standard type and it does not necessarily have to be
-normalized. Several operations will be done on the kernel image prior to
-the program's processing:
+The convolution kernel file name.
+The @code{BITPIX} (data type) value of this file can be any standard type and 
it does not necessarily have to be normalized.
+Several operations will be done on the kernel image prior to the program's 
processing:
 
 @itemize
 
@@ -15094,19 +11159,16 @@ All blank pixels (see @ref{Blank pixels}) will be set 
to zero.
 It will be normalized so the sum of its pixels equal unity.
 
 @item
-It will be flipped so the convolved image has the same
-orientation. This is only relevant if the kernel is not circular. See
-@ref{Convolution process}.
+It will be flipped so the convolved image has the same orientation.
+This is only relevant if the kernel is not circular, see @ref{Convolution process}.
 @end itemize
 
 @item -U STR
 @itemx --khdu=STR
-The convolution kernel HDU. Although the kernel file name is optional,
-before running any of the programs, they need to have a value for
-@option{--khdu} even if the default kernel is to be used. So be sure to
-keep its value in at least one of the configuration files (see
-@ref{Configuration files}). By default, the system configuration file has a
-value.
+The convolution kernel HDU.
+Although the kernel file name is optional, before running any of the programs, 
they need to have a value for @option{--khdu} even if the default kernel is to 
be used.
+So be sure to keep its value in at least one of the configuration files (see 
@ref{Configuration files}).
+By default, the system configuration file has a value.
 
 @end table
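+
+For example (a sketch with hypothetical file and HDU names), the two options above could be given to NoiseChisel as:
+
+@example
+$ astnoisechisel image.fits --kernel=kernel.fits --khdu=1
+@end example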
 
@@ -15114,9 +11176,8 @@ value.
 @node Invoking astconvolve,  , Convolution kernel, Convolve
 @subsection Invoking Convolve
 
-Convolve an input dataset (2D image or 1D spectrum for example) with a
-known kernel, or make the kernel necessary to match two PSFs. The general
-template for Convolve is:
+Convolve an input dataset (2D image or 1D spectrum for example) with a known 
kernel, or make the kernel necessary to match two PSFs.
+The general template for Convolve is:
 
 @example
 $ astconvolve [OPTION...] ASTRdata
@@ -15143,64 +11204,50 @@ $ astconvolve --kernel=sharperimage.fits 
--makekernel=10           \
 $ echo "1 3 10 3 1" | sed 's/ /\n/g' | astconvolve spectra.fits -c14
 @end example
 
-The only argument accepted by Convolve is an input image file. Some of the
-options are the same between Convolve and some other Gnuastro
-programs. Therefore, to avoid repetition, they will not be repeated
-here. For the full list of options shared by all Gnuastro programs, please
-see @ref{Common options}. In particular, in the spatial domain, on a
-multi-dimensional datasets, convolve uses Gnuastro's tessellation to speed
-up the run, see @ref{Tessellation}. Common options related to tessellation
-are described in in @ref{Processing options}.
+The only argument accepted by Convolve is an input image file.
+Some of the options are the same between Convolve and some other Gnuastro 
programs.
+Therefore, to avoid repetition, they will not be repeated here.
+For the full list of options shared by all Gnuastro programs, please see 
@ref{Common options}.
+In particular, in the spatial domain, on multi-dimensional datasets, Convolve uses Gnuastro's tessellation to speed up the run, see @ref{Tessellation}.
+Common options related to tessellation are described in @ref{Processing options}.
 
-1-dimensional datasets (for example spectra) are only read as columns
-within a table (see @ref{Tables} for more on how Gnuastro programs read
-tables). Note that currently 1D convolution is only implemented in the
-spatial domain and thus kernel-matching is also not supported.
+1-dimensional datasets (for example spectra) are only read as columns within a 
table (see @ref{Tables} for more on how Gnuastro programs read tables).
+Note that currently 1D convolution is only implemented in the spatial domain 
and thus kernel-matching is also not supported.
 
-Here we will only explain the options particular to Convolve. Run Convolve
-with @option{--help} in order to see the full list of options Convolve
-accepts, irrespective of where they are explained in this book.
+Here we will only explain the options particular to Convolve.
+Run Convolve with @option{--help} in order to see the full list of options 
Convolve accepts, irrespective of where they are explained in this book.
 
 @table @option
 
 @item --kernelcolumn
-Column containing the 1D kernel. When the input dataset is a 1-dimensional
-column, and the host table has more than one column, use this option to
-specify which column should be used.
+Column containing the 1D kernel.
+When the input dataset is a 1-dimensional column, and the host table has more 
than one column, use this option to specify which column should be used.
 
 @item --nokernelflip
-Do not flip the kernel after reading it the spatial domain
-convolution. This can be useful if the flipping has already been
-applied to the kernel.
+Do not flip the kernel after reading it for spatial domain convolution.
+This can be useful if the flipping has already been applied to the kernel.
 
-@item --nokernelnorm
-Do not normalize the kernel after reading it, such that the sum of its
-pixels is unity.
+@item --nokernelnorm
+Do not normalize the kernel after reading it, such that the sum of its pixels 
is unity.
 
 @item -d STR
 @itemx --domain=STR
 @cindex Discrete Fourier transform
-The domain to use for the convolution. The acceptable values are
-`@code{spatial}' and `@code{frequency}', corresponding to the respective
-domain.
+The domain to use for the convolution.
+The acceptable values are `@code{spatial}' and `@code{frequency}', 
corresponding to the respective domain.
 
-For large images, the frequency domain process will be more efficient than
-convolving in the spatial domain. However, the edges of the image will
-loose some flux (see @ref{Edges in the spatial domain}) and the image must
-not contain any blank pixels, see @ref{Spatial vs. Frequency domain}.
+For large images, the frequency domain process will be more efficient than 
convolving in the spatial domain.
+However, the edges of the image will lose some flux (see @ref{Edges in the spatial domain}) and the image must not contain any blank pixels, see @ref{Spatial vs. Frequency domain}.
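+
+For example (hypothetical file names), to force convolution in the frequency domain:
+
+@example
+$ astconvolve image.fits --kernel=psf.fits --domain=frequency
+@end example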
 
 
 @item --checkfreqsteps
-With this option a file with the initial name of the output file will
-be created that is suffixed with @file{_freqsteps.fits}, all the steps
-done to arrive at the final convolved image are saved as extensions in
-this file. The extensions in order are:
+With this option a file with the initial name of the output file will be created that is suffixed with @file{_freqsteps.fits}; all the steps done to arrive at the final convolved image are saved as extensions in this file.
+The extensions in order are:
 
 @enumerate
 @item
-The padded input image. In frequency domain convolution the two images
-(input and convolved) have to be the same size and both should be
-padded by zeros.
+The padded input image.
+In frequency domain convolution the two images (input and convolved) have to 
be the same size and both should be padded by zeros.
 
 @item
 The padded kernel, similar to the above.
@@ -15211,86 +11258,61 @@ The padded kernel, similar to the above.
 @cindex Numbers, complex
 @cindex Fourier spectrum
 @cindex Spectrum, Fourier
-The Fourier spectrum of the forward Fourier transform of the input
-image. Note that the Fourier transform is a complex operation (and not
-view able in one image!)  So we either have to show the `Fourier
-spectrum' or the `Phase angle'. For the complex number
-@mymath{a+ib}, the Fourier spectrum is defined as
-@mymath{\sqrt{a^2+b^2}} while the phase angle is defined as
-@mymath{\arctan(b/a)}.
+The Fourier spectrum of the forward Fourier transform of the input image.
+Note that the Fourier transform is a complex operation (and not viewable in one image!), so we either have to show the `Fourier spectrum' or the `Phase angle'.
+For the complex number @mymath{a+ib}, the Fourier spectrum is defined as 
@mymath{\sqrt{a^2+b^2}} while the phase angle is defined as 
@mymath{\arctan(b/a)}.
 
 @item
-The Fourier spectrum of the forward Fourier transform of the kernel
-image.
+The Fourier spectrum of the forward Fourier transform of the kernel image.
 
 @item
-The Fourier spectrum of the multiplied (through complex arithmetic)
-transformed images.
+The Fourier spectrum of the multiplied (through complex arithmetic) 
transformed images.
 
 @item
 @cindex Round-off error
 @cindex Floating point round-off error
 @cindex Error, floating point round-off
-The inverse Fourier transform of the multiplied image. If you open it,
-you will see that the convolved image is now in the center, not on one
-side of the image as it started with (in the padded image of the first
-extension). If you are working on a mock image which originally had
-pixels of precisely 0.0, you will notice that in those parts that your
-convolved profile(s) did not convert, the values are now
-@mymath{\sim10^{-18}}, this is due to floating-point round off
-errors. Therefore in the final step (when cropping the central parts
-of the image), we also remove any pixel with a value less than
-@mymath{10^{-17}}.
+The inverse Fourier transform of the multiplied image.
+If you open it, you will see that the convolved image is now in the center, 
not on one side of the image as it started with (in the padded image of the 
first extension).
+If you are working on a mock image which originally had pixels of precisely 0.0, you will notice that in those parts that your convolved profile(s) did not cover, the values are now @mymath{\sim10^{-18}}; this is due to floating-point round-off errors.
+Therefore in the final step (when cropping the central parts of the image), we 
also remove any pixel with a value less than @mymath{10^{-17}}.
 @end enumerate
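+
+As a sketch (hypothetical file names), the file of steps above would be produced by a command like:
+
+@example
+$ astconvolve image.fits --kernel=psf.fits --domain=frequency \
+              --checkfreqsteps
+@end example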
 
 @item --noedgecorrection
-Do not correct the edge effect in spatial domain convolution. For a full
-discussion, please see @ref{Edges in the spatial domain}.
+Do not correct the edge effect in spatial domain convolution.
+For a full discussion, please see @ref{Edges in the spatial domain}.
 
 @item -m INT
 @itemx --makekernel=INT
-(@option{=INT}) If this option is called, Convolve will do de-convolution
-(see @ref{Convolution theorem}). The image specified by the
-@option{--kernel} option is assumed to be the sharper (less blurry) image
-and the input image is assumed to be the more blurry image. The value given
-to this option will be used as the maximum radius of the kernel. Any pixel
-in the final kernel that is larger than this distance from the center will
-be set to zero. The two images must have the same size.
-
-Noise has large frequencies which can make the result less reliable for the
-higher frequencies of the final result. So all the frequencies which have a
-spectrum smaller than the value given to the @option{minsharpspec} option
-in the sharper input image are set to zero and not divided. This will cause
-the wings of the final kernel to be flatter than they would ideally be
-which will make the convolved image result unreliable if it is too
-high. Some notes to take into account for a good result:
-@itemize
+(@option{=INT}) If this option is called, Convolve will do de-convolution (see 
@ref{Convolution theorem}).
+The image specified by the @option{--kernel} option is assumed to be the 
sharper (less blurry) image and the input image is assumed to be the more 
blurry image.
+The value given to this option will be used as the maximum radius of the 
kernel.
+Any pixel in the final kernel that is larger than this distance from the 
center will be set to zero.
+The two images must have the same size.
 
+Noise has large frequencies which can make the result less reliable for the 
higher frequencies of the final result.
+So all the frequencies which have a spectrum smaller than the value given to the @option{--minsharpspec} option in the sharper input image are set to zero and not divided.
+This will cause the wings of the final kernel to be flatter than they would ideally be, which will make the convolved image unreliable if the given value is too high.
+Some notes to take into account for a good result (see the command sketch below):
+
+@itemize
 @item
-Choose a bright (unsaturated) star and use a region box (with Crop for
-example, see @ref{Crop}) that is sufficiently above the noise.
+Choose a bright (unsaturated) star and use a region box (with Crop for 
example, see @ref{Crop}) that is sufficiently above the noise.
 
 @item
-Use Warp (see @ref{Warp}) to warp the pixel grid so the star's center is
-exactly on the center of the central pixel in the cropped image. This will
-certainly slightly degrade the result, however, it is necessary. If there
-are multiple good stars, you can shift all of them, then normalize them (so
-the sum of each star's pixels is one) and then take their average to
-decrease this effect.
+Use Warp (see @ref{Warp}) to warp the pixel grid so the star's center is 
exactly on the center of the central pixel in the cropped image.
+This will certainly slightly degrade the result, however, it is necessary.
+If there are multiple good stars, you can shift all of them, then normalize 
them (so the sum of each star's pixels is one) and then take their average to 
decrease this effect.
 
 @item
-The shifting might move the center of the star by one pixel in any
-direction, so crop the central pixel of the warped image to have a
-clean image for the de-convolution.
+The shifting might move the center of the star by one pixel in any direction, 
so crop the central pixel of the warped image to have a clean image for the 
de-convolution.
 @end itemize
 
 Note that this feature is not yet supported in 1-dimensional datasets.
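+
+The steps above might be clearer as a command sketch.
+Everything below is hypothetical (file names, coordinates and option values are assumptions; see @ref{Crop} and @ref{Warp} for the exact options):
+
+@example
+## Crop a box around a bright, unsaturated star (assumed to be
+## near pixel 512,512 of the input).
+$ astcrop image.fits --mode=img --center=512,512 --width=51 \
+          --output=star.fits
+
+## Warp the grid so the star's center falls exactly on a pixel
+## center (the shift values are just placeholders).
+$ astwarp star.fits --translate=0.3,-0.2 --output=sharp.fits
+
+## Use the prepared star as the sharper image for de-convolution.
+$ astconvolve blurry.fits --kernel=sharp.fits --makekernel=10
+@end example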
 
 @item -c
 @itemx --minsharpspec
-(@option{=FLT}) The minimum frequency spectrum (or coefficient, or pixel
-value in the frequency domain image) to use in deconvolution, see the
-explanations under the @option{--makekernel} option for more information.
+(@option{=FLT}) The minimum frequency spectrum (or coefficient, or pixel value 
in the frequency domain image) to use in deconvolution, see the explanations 
under the @option{--makekernel} option for more information.
 @end table
 
 
@@ -15305,69 +11327,45 @@ explanations under the @option{--makekernel} option 
for more information.
 
 @node Warp,  , Convolve, Data manipulation
 @section Warp
-Image warping is the process of mapping the pixels of one image onto a new
-pixel grid. This process is sometimes known as transformation, however
-following the discussion of Heckbert 1989@footnote{Paul
-S. Heckbert. 1989. @emph{Fundamentals of Texture mapping and Image
-Warping}, Master's thesis at University of California, Berkeley.} we will
-not be using that term because it can be confused with only pixel value or
-flux transformations. Here we specifically mean the pixel grid
-transformation which is better conveyed with `warp'.
+Image warping is the process of mapping the pixels of one image onto a new 
pixel grid.
+This process is sometimes known as transformation; however, following the discussion of Heckbert 1989@footnote{Paul S. Heckbert. 1989. @emph{Fundamentals of Texture mapping and Image Warping}, Master's thesis at University of California, Berkeley.} we will not be using that term because it can be confused with only pixel value or flux transformations.
+Here we specifically mean the pixel grid transformation which is better 
conveyed with `warp'.
 
 @cindex Gravitational lensing
-Image wrapping is a very important step in astronomy, both in observational
-data analysis and in simulating modeled images. In modeling, warping an
-image is necessary when we want to apply grid transformations to the
-initial models, for example in simulating gravitational lensing (Radial
-warpings are not yet included in Warp). Observational reasons for warping
-an image are listed below:
+Image warping is a very important step in astronomy, both in observational data analysis and in simulating modeled images.
+In modeling, warping an image is necessary when we want to apply grid 
transformations to the initial models, for example in simulating gravitational 
lensing (Radial warpings are not yet included in Warp).
+Observational reasons for warping an image are listed below:
 
 @itemize
 
 @cindex Signal to noise ratio
 @item
-@strong{Noise:} Most scientifically interesting targets are inherently
-faint (have a very low Signal to noise ratio). Therefore one short
-exposure is not enough to detect such objects that are drowned deeply
-in the noise. We need multiple exposures so we can add them together
-and increase the objects' signal to noise ratio. Keeping the telescope
-fixed on one field of the sky is practically impossible. Therefore
-very deep observations have to put into the same grid before adding
-them.
+@strong{Noise:} Most scientifically interesting targets are inherently faint 
(have a very low Signal to noise ratio).
+Therefore one short exposure is not enough to detect such objects that are 
drowned deeply in the noise.
+We need multiple exposures so we can add them together and increase the 
objects' signal to noise ratio.
+Keeping the telescope fixed on one field of the sky is practically impossible.
+Therefore very deep observations have to be put into the same grid before adding them.
 
 @cindex Mosaicing
 @cindex Image mosaic
 @item
-@strong{Resolution:} If we have multiple images of one patch of the
-sky (hopefully at multiple orientations) we can warp them to the same
-grid. The multiple orientations will allow us to `guess' the values of
-pixels on an output pixel grid that has smaller pixel sizes and thus
-increase the resolution of the output. This process of merging
-multiple observations is known as Mosaicing.
+@strong{Resolution:} If we have multiple images of one patch of the sky 
(hopefully at multiple orientations) we can warp them to the same grid.
+The multiple orientations will allow us to `guess' the values of pixels on an 
output pixel grid that has smaller pixel sizes and thus increase the resolution 
of the output.
+This process of merging multiple observations is known as Mosaicing.
 
 @cindex Cosmic rays
 @item
-@strong{Cosmic rays:} Cosmic rays can randomly fall on any part of an
-image. If they collide vertically with the camera, they are going to
-create a very sharp and bright spot that in most cases can be separated
-easily@footnote{All astronomical targets are blurred with the PSF, see
-@ref{PSF}, however a cosmic ray is not and so it is very sharp (it
-suddenly stops at one pixel).}. However, depending on the depth of the
-camera pixels, and the angle that a cosmic rays collides with it, it
-can cover a line-like larger area on the CCD which makes the detection
-using their sharp edges very hard and error prone. One of the best
-methods to remove cosmic rays is to compare multiple images of the
-same field. To do that, we need all the images to be on the same pixel
-grid.
+@strong{Cosmic rays:} Cosmic rays can randomly fall on any part of an image.
+If they collide vertically with the camera, they are going to create a very 
sharp and bright spot that in most cases can be separated easily@footnote{All 
astronomical targets are blurred with the PSF, see @ref{PSF}, however a cosmic 
ray is not and so it is very sharp (it suddenly stops at one pixel).}.
+However, depending on the depth of the camera pixels, and the angle at which a cosmic ray collides with it, it can cover a line-like larger area on the CCD which makes the detection using their sharp edges very hard and error prone.
+One of the best methods to remove cosmic rays is to compare multiple images of 
the same field.
+To do that, we need all the images to be on the same pixel grid.
 
 @cindex Optical distortion
 @cindex Distortion, optical
 @item
-@strong{Optical distortion:} (Not yet included in Warp) In wide field
-images, the optical distortion that occurs on the outer parts of the focal
-plane will make accurate comparison of the objects at various locations
-impossible. It is therefore necessary to warp the image and correct for
-those distortions prior to the analysis.
+@strong{Optical distortion:} (Not yet included in Warp) In wide field images, 
the optical distortion that occurs on the outer parts of the focal plane will 
make accurate comparison of the objects at various locations impossible.
+It is therefore necessary to warp the image and correct for those distortions 
prior to the analysis.
 
 @cindex ACS
 @cindex CCD
@@ -15377,10 +11375,7 @@ those distortions prior to the analysis.
 @cindex Advanced camera for surveys
 @cindex Hubble Space Telescope (HST)
 @item
-@strong{Detector not on focal plane:} In some cases (like the Hubble
-Space Telescope ACS and WFC3 cameras), the CCD might be tilted
-compared to the focal plane, therefore the recorded CCD pixels have to
-be projected onto the focal plane before further analysis.
+@strong{Detector not on focal plane:} In some cases (like the Hubble Space Telescope ACS and WFC3 cameras), the CCD might be tilted compared to the focal plane; therefore the recorded CCD pixels have to be projected onto the focal plane before further analysis.
 
 @end itemize
 
@@ -15396,14 +11391,8 @@ be projected onto the focal plane before further 
analysis.
 
 @cindex Scaling
 @cindex Coordinate transformation
-Let's take @mymath{\left[\matrix{u&v}\right]} as the coordinates of
-a point in the input image and @mymath{\left[\matrix{x&y}\right]} as the
-coordinates of that same point in the output image@footnote{These can
-be any real number, we are not necessarily talking about integer
-pixels here.}. The simplest form of coordinate transformation (or
-warping) is the scaling of the coordinates, let's assume we want to
-scale the first axis by @mymath{M} and the second by @mymath{N}, the
-output coordinates of that point can be calculated by
+Let's take @mymath{\left[\matrix{u&v}\right]} as the coordinates of a point in 
the input image and @mymath{\left[\matrix{x&y}\right]} as the coordinates of 
that same point in the output image@footnote{These can be any real number, we 
are not necessarily talking about integer pixels here.}.
+The simplest form of coordinate transformation (or warping) is the scaling of the coordinates.
+Let's assume we want to scale the first axis by @mymath{M} and the second by @mymath{N}; the output coordinates of that point can then be calculated by
 
 @dispmath{\left[\matrix{x\cr y}\right]=
           \left[\matrix{Mu\cr Nv}\right]=
@@ -15413,12 +11402,10 @@ output coordinates of that point can be calculated by
 @cindex Multiplication, Matrix
 @cindex Rotation of coordinates
 @noindent
-Note that these are matrix multiplications. We thus see that we can
-represent any such grid warping as a matrix. Another thing we can do
-with this @mymath{2\times2} matrix is to rotate the output coordinate
-around the common center of both coordinates. If the output is rotated
-anticlockwise by @mymath{\theta} degrees from the positive (to the
-right) horizontal axis, then the warping matrix should become:
+Note that these are matrix multiplications.
+We thus see that we can represent any such grid warping as a matrix.
+Another thing we can do with this @mymath{2\times2} matrix is to rotate the 
output coordinate around the common center of both coordinates.
+If the output is rotated anticlockwise by @mymath{\theta} degrees from the 
positive (to the right) horizontal axis, then the warping matrix should become:
 
 @dispmath{\left[\matrix{x\cr y}\right]=
    \left[\matrix{ucos\theta-vsin\theta\cr usin\theta+vcos\theta}\right]=
@@ -15428,9 +11415,7 @@ right) horizontal axis, then the warping matrix should 
become:
 
 @cindex Flip coordinates
 @noindent
-We can also flip the coordinates around the first axis, the second
-axis and the coordinate center with the following three matrices
-respectively:
+We can also flip the coordinates around the first axis, the second axis and 
the coordinate center with the following three matrices respectively:
 
 @dispmath{\left[\matrix{1&0\cr0&-1}\right]\quad\quad
           \left[\matrix{-1&0\cr0&1}\right]\quad\quad
@@ -15438,59 +11423,41 @@ respectively:
 
 @cindex Shear
 @noindent
-The final thing we can do with this definition of a @mymath{2\times2}
-warping matrix is shear. If we want the output to be sheared along the
-first axis with @mymath{A} and along the second with @mymath{B}, then
-we can use the matrix:
+The final thing we can do with this definition of a @mymath{2\times2} warping 
matrix is shear.
+If we want the output to be sheared along the first axis with @mymath{A} and 
along the second with @mymath{B}, then we can use the matrix:
 
 @dispmath{\left[\matrix{1&A\cr B&1}\right]}
 
 @noindent
-To have one matrix representing any combination of these steps, you
-use matrix multiplication, see @ref{Merging multiple warpings}. So any
-combinations of these transformations can be displayed with one
-@mymath{2\times2} matrix:
+To have one matrix representing any combination of these steps, you use matrix 
multiplication, see @ref{Merging multiple warpings}.
+So any combination of these transformations can be represented with one @mymath{2\times2} matrix:
 
 @dispmath{\left[\matrix{a&b\cr c&d}\right]}
 
 @cindex Wide Field Camera 3
 @cindex Advanced Camera for Surveys
 @cindex Hubble Space Telescope (HST)
-The transformations above can cover a lot of the needs of most
-coordinate transformations. However they are limited to mapping the
-point @mymath{[\matrix{0&0}]} to @mymath{[\matrix{0&0}]}. Therefore
-they are useless if you want one coordinate to be shifted compared to
-the other one. They are also space invariant, meaning that all the
-coordinates in the image will receive the same transformation. In
-other words, all the pixels in the output image will have the same
-area if placed over the input image. So transformations which require
-varying output pixel sizes like projections cannot be applied through
-this @mymath{2\times2} matrix either (for example for the tilted ACS
-and WFC3 camera detectors on board the Hubble space telescope).
+The transformations above can cover a lot of the needs of most coordinate 
transformations.
+However they are limited to mapping the point @mymath{[\matrix{0&0}]} to 
@mymath{[\matrix{0&0}]}.
+Therefore they are useless if you want one coordinate to be shifted compared 
to the other one.
+They are also space invariant, meaning that all the coordinates in the image 
will receive the same transformation.
+In other words, all the pixels in the output image will have the same area if 
placed over the input image.
+So transformations which require varying output pixel sizes like projections cannot be applied through this @mymath{2\times2} matrix either (for example for the tilted ACS and WFC3 camera detectors on board the Hubble Space Telescope).
 
 @cindex M@"obius, August. F.
 @cindex Homogeneous coordinates
-@cindex Coordinates, homogeneous
-To add these further capabilities, namely translation and projection,
-we use the homogeneous coordinates. They were defined about 200 years
-ago by August Ferdinand M@"obius (1790 -- 1868). For simplicity, we
-will only discuss points on a 2D plane and avoid the complexities of
-higher dimensions. We cannot provide a deep mathematical introduction
-here, interested readers can get a more detailed explanation from
-Wikipedia@footnote{@url{http://en.wikipedia.org/wiki/Homogeneous_coordinates}}
-and the references therein.
-
-By adding an extra coordinate to a point we can add the flexibility we
-need. The point @mymath{[\matrix{x&y}]} can be represented as
-@mymath{[\matrix{xZ&yZ&Z}]} in homogeneous coordinates. Therefore
-multiplying all the coordinates of a point in the homogeneous
-coordinates with a constant will give the same point. Put another way,
-the point @mymath{[\matrix{x&y&Z}]} corresponds to the point
-@mymath{[\matrix{x/Z&y/Z}]} on the constant @mymath{Z} plane. Setting
-@mymath{Z=1}, we get the input image plane, so
-@mymath{[\matrix{u&v&1}]} corresponds to @mymath{[\matrix{u&v}]}. With
-this definition, the transformations above can be generally written
-as:
+@cindex Coordinates, homogeneous
+To add these further capabilities, namely translation and projection, we use 
the homogeneous coordinates.
+They were defined about 200 years ago by August Ferdinand M@"obius (1790 -- 
1868).
+For simplicity, we will only discuss points on a 2D plane and avoid the 
complexities of higher dimensions.
+We cannot provide a deep mathematical introduction here; interested readers can get a more detailed explanation from Wikipedia@footnote{@url{http://en.wikipedia.org/wiki/Homogeneous_coordinates}} and the references therein.
+
+By adding an extra coordinate to a point we can add the flexibility we need.
+The point @mymath{[\matrix{x&y}]} can be represented as 
@mymath{[\matrix{xZ&yZ&Z}]} in homogeneous coordinates.
+Therefore multiplying all the coordinates of a point in the homogeneous 
coordinates with a constant will give the same point.
+Put another way, the point @mymath{[\matrix{x&y&Z}]} corresponds to the point 
@mymath{[\matrix{x/Z&y/Z}]} on the constant @mymath{Z} plane.
+Setting @mymath{Z=1}, we get the input image plane, so 
@mymath{[\matrix{u&v&1}]} corresponds to @mymath{[\matrix{u&v}]}.
+With this definition, the transformations above can be generally written as:
 
 @dispmath{\left[\matrix{x\cr y\cr 1}\right]=
           \left[\matrix{a&b&0\cr c&d&0\cr 0&0&1}\right]
@@ -15499,24 +11466,17 @@ as:
 @noindent
 @cindex Affine Transformation
 @cindex Transformation, affine
-We thus acquired 4 extra degrees of freedom. By giving non-zero values
-to the zero valued elements of the last column we can have translation
-(try the matrix multiplication!). In general, any coordinate
-transformation that is represented by the matrix below is known as an
-affine
-transformation@footnote{@url{http://en.wikipedia.org/wiki/Affine_transformation}}:
+We thus acquired 4 extra degrees of freedom.
+By giving non-zero values to the zero valued elements of the last column we 
can have translation (try the matrix multiplication!).
+In general, any coordinate transformation that is represented by the matrix 
below is known as an affine 
transformation@footnote{@url{http://en.wikipedia.org/wiki/Affine_transformation}}:
 
 @dispmath{\left[\matrix{a&b&c\cr d&e&f\cr 0&0&1}\right]}
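+
+To see the translation explicitly, multiplying an affine matrix that only has shifts @mymath{c} and @mymath{f} in the last column (and is the identity elsewhere) by a homogeneous point gives:
+
+@dispmath{\left[\matrix{1&0&c\cr 0&1&f\cr 0&0&1}\right]
+          \left[\matrix{u\cr v\cr 1}\right]=
+          \left[\matrix{u+c\cr v+f\cr 1}\right]}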
 
 @cindex Homography
 @cindex Projective transformation
 @cindex Transformation, projective
-We can now consider translation, but the affine transform is still
-spatially invariant. Giving non-zero values to the other two elements
-in the matrix above gives us the projective transformation or
-Homography@footnote{@url{http://en.wikipedia.org/wiki/Homography}}
-which is the most general type of transformation with the
-@mymath{3\times3} matrix:
+We can now consider translation, but the affine transform is still spatially 
invariant.
+Giving non-zero values to the other two elements in the matrix above gives us 
the projective transformation or 
Homography@footnote{@url{http://en.wikipedia.org/wiki/Homography}} which is the 
most general type of transformation with the @mymath{3\times3} matrix:
 
 @dispmath{\left[\matrix{x'\cr y'\cr w}\right]=
           \left[\matrix{a&b&c\cr d&e&f\cr g&h&1}\right]
@@ -15528,20 +11488,15 @@ So the output coordinates can be calculated from:
 @dispmath{x={x' \over w}={au+bv+c \over gu+hv+1}\quad\quad\quad\quad
           y={y' \over w}={du+ev+f \over gu+hv+1}}
 
-Thus with Homography we can change the sizes of the output pixels on
-the input plane, giving a `perspective'-like visual impression. This
-can be quantitatively seen in the two equations above. When
-@mymath{g=h=0}, the denominator is independent of @mymath{u} or
-@mymath{v} and thus we have spatial invariance. Homography preserves
-lines at all orientations. A very useful fact about Homography is that
-its inverse is also a Homography. These two properties play a very
-important role in the implementation of this transformation. A short
-but instructive and illustrated review of affine, projective and also
-bi-linear mappings is provided in Heckbert 1989@footnote{Paul
-S. Heckbert. 1989. @emph{Fundamentals of Texture mapping and Image
-Warping}, Master's thesis at University of California, Berkeley. Note
-that since points are defined as row vectors there, the matrix is the
-transpose of the one discussed here.}.
+Thus with Homography we can change the sizes of the output pixels on the input 
plane, giving a `perspective'-like visual impression.
+This can be quantitatively seen in the two equations above.
+When @mymath{g=h=0}, the denominator is independent of @mymath{u} or 
@mymath{v} and thus we have spatial invariance.
+Homography preserves lines at all orientations.
+A very useful fact about Homography is that its inverse is also a Homography.
+These two properties play a very important role in the implementation of this 
transformation.
+A short but instructive and illustrated review of affine, projective and also 
bi-linear mappings is provided in Heckbert 1989@footnote{
+Paul S. Heckbert. 1989. @emph{Fundamentals of Texture mapping and Image 
Warping}, Master's thesis at University of California, Berkeley.
+Note that since points are defined as row vectors there, the matrix is the 
transpose of the one discussed here.}.
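+
+As a small worked instance of the equations above (the numbers are purely illustrative), take @mymath{a=e=1}, @mymath{g=0.1} and all other free elements zero.
+Then @mymath{x=u/(0.1u+1)} and @mymath{y=v/(0.1u+1)}: the point @mymath{[\matrix{1&1}]} maps to roughly @mymath{[\matrix{0.91&0.91}]}, while @mymath{[\matrix{10&10}]} maps to @mymath{[\matrix{5&5}]}.
+Points farther along the first axis are compressed more strongly, which is exactly the loss of spatial invariance when @mymath{g\neq0}.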
 
 @node Merging multiple warpings, Resampling, Warping basics, Warp
 @subsection Merging multiple warpings
@@ -15551,24 +11506,17 @@ transpose of the one discussed here.}.
 @cindex Multiplication, matrix
 @cindex Non-commutative operations
 @cindex Operations, non-commutative
-In @ref{Warping basics} we saw how a basic warp/transformation can be
-represented with a matrix. To make more complex warpings (for example to
-define a translation, rotation and scale as one warp) the individual
-matrices have to be multiplied through matrix multiplication. However
-matrix multiplication is not commutative, so the order of the set of
-matrices you use for the multiplication is going to be very important.
-
-The first warping should be placed as the left-most matrix. The second
-warping to the right of that and so on. The second transformation is
-going to occur on the warped coordinates of the first.  As an example
-for merging a few transforms into one matrix, the multiplication below
-represents the rotation of an image about a point
-@mymath{[\matrix{U&V}]} anticlockwise from the horizontal axis by an
-angle of @mymath{\theta}. To do this, first we take the origin to
-@mymath{[\matrix{U&V}]} through translation. Then we rotate the image,
-then we translate it back to where it was initially. These three
-operations can be merged in one operation by calculating the matrix
-multiplication below:
+In @ref{Warping basics} we saw how a basic warp/transformation can be 
represented with a matrix.
+To make more complex warpings (for example to define a translation, rotation 
and scale as one warp) the individual matrices have to be multiplied through 
matrix multiplication.
+However matrix multiplication is not commutative, so the order of the set of 
matrices you use for the multiplication is going to be very important.
+
+The first warping should be placed as the left-most matrix.
+The second warping goes to the right of that, and so on.
+The second transformation is going to occur on the warped coordinates of the 
first.
+As an example for merging a few transforms into one matrix, the multiplication 
below represents the rotation of an image about a point @mymath{[\matrix{U&V}]} 
anticlockwise from the horizontal axis by an angle of @mymath{\theta}.
+To do this, first we take the origin to @mymath{[\matrix{U&V}]} through 
translation.
+Then we rotate the image, then we translate it back to where it was initially.
+These three operations can be merged in one operation by calculating the 
matrix multiplication below:
 
 @dispmath{\left[\matrix{1&0&U\cr0&1&V\cr{}0&0&1}\right]
           \left[\matrix{cos\theta&-sin\theta&0\cr sin\theta&cos\theta&0\cr     
           0&0&1}\right]
@@ -15590,20 +11538,13 @@ multiplication below:
 @cindex Photoelectrons
 @cindex Picture element
 @cindex Mixing pixel values
-A digital image is composed of discrete `picture elements' or
-`pixels'. When a real image is created from a camera or detector, each
-pixel's area is used to store the number of photo-electrons that were
-created when incident photons collided with that pixel's surface area. This
-process is called the `sampling' of a continuous or analog data into
-digital data. When we change the pixel grid of an image or warp it as we
-defined in @ref{Warping basics}, we have to `guess' the flux value of each
-pixel on the new grid based on the old grid, or re-sample it. Because of
-the `guessing', any form of warping on the data is going to degrade the
-image and mix the original pixel values with each other. So if an analysis
-can be done on an unwarped data image, it is best to leave the image
-untouched and pursue the analysis. However as discussed in @ref{Warp} this
-is not possible most of the times, so we have to accept the problem and
-re-sample the image.
+A digital image is composed of discrete `picture elements' or `pixels'.
+When a real image is created from a camera or detector, each pixel's area is 
used to store the number of photo-electrons that were created when incident 
photons collided with that pixel's surface area.
+This process is called the `sampling' of continuous (analog) data into digital data.
+When we change the pixel grid of an image or warp it as we defined in 
@ref{Warping basics}, we have to `guess' the flux value of each pixel on the 
new grid based on the old grid, or re-sample it.
+Because of the `guessing', any form of warping on the data is going to degrade 
the image and mix the original pixel values with each other.
+So if an analysis can be done on an unwarped data image, it is best to leave 
the image untouched and pursue the analysis.
+However as discussed in @ref{Warp} this is not possible most of the time, so we have to accept the problem and re-sample the image.
 
 @cindex Point pixels
 @cindex Interpolation
@@ -15612,72 +11553,47 @@ re-sample the image.
 @cindex Bi-linear interpolation
 @cindex Interpolation, bicubic
 @cindex Interpolation, bi-linear
-In most applications of image processing, it is sufficient to consider
-each pixel to be a point and not an area. This assumption can
-significantly speed up the processing of an image and also the
-simplicity of the code. It is a fine assumption when the
-signal to noise ratio of the objects are very large. The question will
-then be one of interpolation because you have multiple points
-distributed over the output image and you want to find the values at
-the pixel centers. To increase the accuracy, you might also sample
-more than one point from within a pixel giving you more points for a
-more accurate interpolation in the output grid.
+In most applications of image processing, it is sufficient to consider each 
pixel to be a point and not an area.
+This assumption can significantly speed up the processing of an image and also simplify the code.
+It is a fine assumption when the signal to noise ratio of the objects is very large.
+The question will then be one of interpolation because you have multiple 
points distributed over the output image and you want to find the values at the 
pixel centers.
+To increase the accuracy, you might also sample more than one point from 
within a pixel giving you more points for a more accurate interpolation in the 
output grid.
 
 @cindex Image edges
 @cindex Edges, image
-However, interpolation has several problems. The first one is that it will
-depend on the type of function you want to assume for the
-interpolation. For example you can choose a bi-linear or bi-cubic (the
-`bi's are for the 2 dimensional nature of the data) interpolation
-method. For the latter there are various ways to set the
-constants@footnote{see @url{http://entropymine.com/imageworsener/bicubic/}
-for a nice introduction.}. Such functional interpolation functions can fail
-seriously on the edges of an image. They will also need normalization so
-that the flux of the objects before and after the warpings are
-comparable. The most basic problem with such techniques is that they are
-based on a point while a detector pixel is an area. They add a level of
-subjectivity to the data (make more assumptions through the functions than
-the data can handle). For most applications this is fine, but in scientific
-applications where detection of the faintest possible galaxies or fainter
-parts of bright galaxies is our aim, we cannot afford this loss. Because of
-these reasons Warp will not use such interpolation techniques.
+However, interpolation has several problems.
+The first one is that it will depend on the type of function you want to 
assume for the interpolation.
+For example you can choose a bi-linear or bi-cubic (the `bi's are for the 2 
dimensional nature of the data) interpolation method.
+For the latter there are various ways to set the constants@footnote{see 
@url{http://entropymine.com/imageworsener/bicubic/} for a nice introduction.}.
+Such interpolation functions can fail seriously on the edges of an image.
+They will also need normalization so that the flux of the objects before and after the warping is comparable.
+The most basic problem with such techniques is that they are based on a point 
while a detector pixel is an area.
+They add a level of subjectivity to the data (make more assumptions through 
the functions than the data can handle).
+For most applications this is fine, but in scientific applications where 
detection of the faintest possible galaxies or fainter parts of bright galaxies 
is our aim, we cannot afford this loss.
+Because of these reasons Warp will not use such interpolation techniques.
 
 @cindex Drizzle
 @cindex Pixel mixing
 @cindex Exact area resampling
-Warp will do interpolation based on ``pixel mixing''@footnote{For a graphic
-demonstration see @url{http://entropymine.com/imageworsener/pixelmixing/}.}
-or ``area resampling''. This is also what the Hubble Space Telescope
-pipeline calls
-``Drizzling''@footnote{@url{http://en.wikipedia.org/wiki/Drizzle_(image_processing)}}.
 This
-technique requires no functions, it is thus non-parametric. It is also the
-closest we can get (make least assumptions) to what actually happens on the
-detector pixels. The basic idea is that you reverse-transform each output
-pixel to find which pixels of the input image it covers and what fraction
-of the area of the input pixels are covered. To find the output pixel
-value, you simply sum the value of each input pixel weighted by the overlap
-fraction (between 0 to 1) of the output pixel and that input pixel. Through
-this process, pixels are treated as an area not as a point (which is how
-detectors create the image), also the brightness (see @ref{Flux Brightness
-and magnitude}) of an object will be left completely unchanged.
-
-If there are very high spatial-frequency signals in the image (for
-example fringes) which vary on a scale smaller than your output image
-pixel size, pixel mixing can cause
-ailiasing@footnote{@url{http://en.wikipedia.org/wiki/Aliasing}}. So if
-the input image has fringes, they have to be calculated and removed
-separately (which would naturally be done in any astronomical
-application). Because of the PSF no astronomical target has a sharp
-change in the signal so this issue is less important for astronomical
-applications, see @ref{PSF}.
+Warp will do interpolation based on ``pixel mixing''@footnote{For a graphic 
demonstration see @url{http://entropymine.com/imageworsener/pixelmixing/}.} or 
``area resampling''.
+This is also what the Hubble Space Telescope pipeline calls 
``Drizzling''@footnote{@url{http://en.wikipedia.org/wiki/Drizzle_(image_processing)}}.
+This technique requires no functions; it is thus non-parametric.
+It is also the closest we can get (make least assumptions) to what actually 
happens on the detector pixels.
+The basic idea is that you reverse-transform each output pixel to find which 
pixels of the input image it covers and what fraction of the area of the input 
pixels are covered.
+To find the output pixel value, you simply sum the value of each input pixel weighted by the overlap fraction (between 0 and 1) of the output pixel and that input pixel.
+Through this process, pixels are treated as an area not as a point (which is how detectors create the image); also, the brightness (see @ref{Flux Brightness and magnitude}) of an object will be left completely unchanged.
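+
+In equation form (the notation here is ours, only to summarize the above): if an output pixel overlaps input pixels with values @mymath{v_i} and overlap fractions @mymath{f_i} (each between 0 and 1), its value @mymath{o} is simply
+
+@dispmath{o=\sum_i f_iv_i}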
+
+If there are very high spatial-frequency signals in the image (for example fringes) which vary on a scale smaller than your output image pixel size, pixel mixing can cause aliasing@footnote{@url{http://en.wikipedia.org/wiki/Aliasing}}.
+So if the input image has fringes, they have to be calculated and removed separately (which would naturally be done in any astronomical application).
+Because of the PSF no astronomical target has a sharp change in the signal, so this issue is less important for astronomical applications, see @ref{PSF}.
 
 
 @node Invoking astwarp,  , Resampling, Warp
 @subsection Invoking Warp
 
-Warp an input dataset into a new grid. Any homographic warp (for example
-scaling, rotation, translation, projection) is acceptable, see @ref{Warping
-basics} for the definitions. The general template for invoking Warp is:
+Warp an input dataset into a new grid.
+Any homographic warp (for example scaling, rotation, translation, projection) 
is acceptable, see @ref{Warping basics} for the definitions.
+The general template for invoking Warp is:
 
 @example
 $ astwarp [OPTIONS...] InputImage
@@ -15703,195 +11619,134 @@ $ astwarp --matrix=1/5,0,4/10,0,1/5,4/10,0,0,1 
image.fits
 $ astwarp --matrix="0.7071,-0.7071,  0.7071,0.7071" image.fits
 @end example
 
-If any processing is to be done, Warp can accept one file as input. As in
-all Gnuastro programs, when an output is not explicitly set with the
-@option{--output} option, the output filename will be set automatically
-based on the operation, see @ref{Automatic output}. For the full list of
-general options to all Gnuastro programs (including Warp), please see
-@ref{Common options}.
+If any processing is to be done, Warp can accept one file as input.
+As in all Gnuastro programs, when an output is not explicitly set with the 
@option{--output} option, the output filename will be set automatically based 
on the operation, see @ref{Automatic output}.
+For the full list of general options to all Gnuastro programs (including 
Warp), please see @ref{Common options}.
 
-To be the most accurate, the input image will be read as a 64-bit double
-precision floating point dataset and all internal processing is done in
-this format (including the raw output type). You can use the common
-@option{--type} option to write the output in any type you want, see
-@ref{Numeric data types}.
+To be the most accurate, the input image will be read as a 64-bit double 
precision floating point dataset and all internal processing is done in this 
format (including the raw output type).
+You can use the common @option{--type} option to write the output in any type 
you want, see @ref{Numeric data types}.
 
-Warps must be specified as command-line options, either as (possibly
-multiple) modular warpings (for example @option{--rotate}, or
-@option{--scale}), or directly as a single raw matrix (with
-@option{--matrix}). If specified together, the latter (direct matrix) will
-take precedence and all the modular warpings will be ignored. Any number of
-modular warpings can be specified on the command-line and configuration
-files. If more than one modular warping is given, all will be merged to
-create one warping matrix. As described in @ref{Merging multiple warpings},
-matrix multiplication is not commutative, so the order of specifying the
-modular warpings on the command-line, and/or configuration files makes a
-difference (see @ref{Configuration file precedence}). The full list of
-modular warpings and the other options particular to Warp are described
-below.
+Warps must be specified as command-line options, either as (possibly multiple) 
modular warpings (for example @option{--rotate}, or @option{--scale}), or 
directly as a single raw matrix (with @option{--matrix}).
+If specified together, the latter (direct matrix) will take precedence and all 
the modular warpings will be ignored.
+Any number of modular warpings can be specified on the command-line and 
configuration files.
+If more than one modular warping is given, all will be merged to create one 
warping matrix.
+As described in @ref{Merging multiple warpings}, matrix multiplication is not 
commutative, so the order of specifying the modular warpings on the 
command-line, and/or configuration files makes a difference (see 
@ref{Configuration file precedence}).
+The full list of modular warpings and the other options particular to Warp are 
described below.
 
-The values to the warping options (modular warpings as well as
-@option{--matrix}), are a sequence of at least one number. Each number in
-this sequence is separated from the next by a comma (@key{,}). Each number
-can also be written as a single fraction (with a forward-slash @key{/}
-between the numerator and denominator). Space and Tab characters are
-permitted between any two numbers, just don't forget to quote the whole
-value. Otherwise, the value will not be fully passed onto the option. See
-the examples above as a demonstration.
+The values to the warping options (modular warpings as well as @option{--matrix}) are a sequence of at least one number.
+Each number in this sequence is separated from the next by a comma (@key{,}).
+Each number can also be written as a single fraction (with a forward-slash 
@key{/} between the numerator and denominator).
+Space and Tab characters are permitted between any two numbers, just don't forget to quote the whole value.
+Otherwise, the value will not be fully passed onto the option.
+See the examples above as a demonstration.
 
 @cindex FITS standard
-Based on the FITS standard, integer values are assigned to the center of a
-pixel and the coordinate [1.0, 1.0] is the center of the first pixel
-(bottom left of the image when viewed in SAO ds9). So the coordinate center
-[0.0, 0.0] is half a pixel away (in each axis) from the bottom left vertex
-of the first pixel. The resampling that is done in Warp (see
-@ref{Resampling}) is done on the coordinate axes and thus directly
-depends on the coordinate center. In some situations this if fine, for
-example when rotating/aligning a real image, all the edge pixels will be
-similarly affected. But in other situations (for example when scaling an
-over-sampled mock image to its intended resolution, this is not desired:
-you want the center of the coordinates to be on the corner of the pixel. In
-such cases, you can use the @option{--centeroncorner} option which will
-shift the center by @mymath{0.5} before the main warp, then shift it back
-by @mymath{-0.5} after the main warp, see below.
+Based on the FITS standard, integer values are assigned to the center of a 
pixel and the coordinate [1.0, 1.0] is the center of the first pixel (bottom 
left of the image when viewed in SAO ds9).
+So the coordinate center [0.0, 0.0] is half a pixel away (in each axis) from 
the bottom left vertex of the first pixel.
+The resampling that is done in Warp (see @ref{Resampling}) is done on the 
coordinate axes and thus directly depends on the coordinate center.
+In some situations this is fine, for example when rotating/aligning a real image, all the edge pixels will be similarly affected.
+But in other situations (for example when scaling an over-sampled mock image to its intended resolution), this is not desired: you want the center of the coordinates to be on the corner of the pixel.
+In such cases, you can use the @option{--centeroncorner} option which will 
shift the center by @mymath{0.5} before the main warp, then shift it back by 
@mymath{-0.5} after the main warp, see below.
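+
+For example (hypothetical file name and scale factor), when shrinking an over-sampled mock image:
+
+@example
+$ astwarp mock.fits --scale=1/5 --centeroncorner
+@end example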
 
 
 @table @option
 
 @item -a
 @itemx --align
-Align the image and celestial (WCS) axes given in the input. After it, the
-vertical image direction (when viewed in SAO ds9) corresponds to the
-declination and the horizontal axis is the inverse of the Right Ascension
-(RA). The inverse of the RA is chosen so the image can correspond to what
-you would actually see on the sky and is common in most survey
-images.
-
-Align is internally treated just like a rotation (@option{--rotation}), but
-uses the input image's WCS to find the rotation angle. Thus, if you have
-rotated the image before calling @option{--align}, you might get unexpected
-results (because the rotation is defined on the original WCS).
+Align the image and celestial (WCS) axes given in the input.
+After it, the vertical image direction (when viewed in SAO ds9) corresponds to the declination and the horizontal axis is the inverse of the Right Ascension (RA).
+The inverse of the RA is chosen so the image can correspond to what you would 
actually see on the sky and is common in most survey images.
+
+Align is internally treated just like a rotation (@option{--rotate}), but uses the input image's WCS to find the rotation angle.
+Thus, if you have rotated the image before calling @option{--align}, you might get unexpected results (because the rotation is defined on the original WCS).
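+
+For example (hypothetical file name):
+
+@example
+$ astwarp image.fits --align
+@end example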
 
 @item -r FLT
 @itemx --rotate=FLT
-Rotate the input image by the given angle in degrees: @mymath{\theta} in
-@ref{Warping basics}. Note that commonly, the WCS structure of the image is
-set such that the RA is the inverse of the image horizontal axis which
-increases towards the right in the FITS standard and as viewed by SAO
-ds9. So the default center for rotation is on the right of the image. If
-you want to rotate about other points, you have to translate the warping
-center first (with @option{--translate}) then apply your rotation and then
-return the center back to the original position (with another call to
-@option{--translate}, see @ref{Merging multiple warpings}.
+Rotate the input image by the given angle in degrees: @mymath{\theta} in 
@ref{Warping basics}.
+Note that commonly, the WCS structure of the image is set such that the RA is 
the inverse of the image horizontal axis which increases towards the right in 
the FITS standard and as viewed by SAO ds9.
+So the default center for rotation is on the right of the image.
+If you want to rotate about other points, you have to translate the warping center first (with @option{--translate}), then apply your rotation, and then return the center back to the original position (with another call to @option{--translate}), see @ref{Merging multiple warpings}.
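+
+As a sketch (hypothetical center and angle; this assumes the modular warpings are composed in the order given, following the prose order of @ref{Merging multiple warpings}):
+
+@example
+$ astwarp image.fits --translate=100,100 --rotate=20 \
+          --translate=-100,-100
+@end example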
 
 @item -s FLT[,FLT]
 @itemx --scale=FLT[,FLT]
-Scale the input image by the given factor(s): @mymath{M} and @mymath{N} in
-@ref{Warping basics}. If only one value is given, then both image axes will
-be scaled with the given value. When two values are given (separated by a
-comma), the first will be used to scale the first axis and the second will
-be used for the second axis. If you only need to scale one axis, use
-@option{1} for the axis you don't need to scale. The value(s) can also be
-written (on the command-line or in configuration files) as a fraction.
+Scale the input image by the given factor(s): @mymath{M} and @mymath{N} in 
@ref{Warping basics}.
+If only one value is given, then both image axes will be scaled with the given 
value.
+When two values are given (separated by a comma), the first will be used to 
scale the first axis and the second will be used for the second axis.
+If you only need to scale one axis, use @option{1} for the axis you don't need 
to scale.
+The value(s) can also be written (on the command-line or in configuration 
files) as a fraction.
 
 @item -f FLT[,FLT]
 @itemx --flip=FLT[,FLT]
-Flip the input image around the given axis(s). If only one value is given,
-then both image axes are flipped. When two values are given (separated by a
-comma), you can choose which axis to flip over. @option{--flip} only takes
-values @code{0} (for no flip), or @code{1} (for a flip). Hence, if you want
-to flip by the second axis only, use @option{--flip=0,1}.
+Flip the input image around the given axis(s).
+If only one value is given, then both image axes are flipped.
+When two values are given (separated by a comma), you can choose which axis to flip over.
+@option{--flip} only takes values @code{0} (for no flip), or @code{1} (for a 
flip).
+Hence, if you want to flip by the second axis only, use @option{--flip=0,1}.
 
 @item -e FLT[,FLT]
 @itemx --shear=FLT[,FLT]
-Shear the input image by the given value(s): @mymath{A} and @mymath{B} in
-@ref{Warping basics}. If only one value is given, then both image axes will
-be sheared with the given value. When two values are given (separated by a
-comma), the first will be used to shear the first axis and the second will
-be used for the second axis. If you only need to shear along one axis, use
-@option{0} for the axis that must be untouched. The value(s) can also be
-written (on the command-line or in configuration files) as a fraction.
+Shear the input image by the given value(s): @mymath{A} and @mymath{B} in 
@ref{Warping basics}.
+If only one value is given, then both image axes will be sheared with the 
given value.
+When two values are given (separated by a comma), the first will be used to 
shear the first axis and the second will be used for the second axis.
+If you only need to shear along one axis, use @option{0} for the axis that 
must be untouched.
+The value(s) can also be written (on the command-line or in configuration 
files) as a fraction.
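+For example, to shear only the first axis of a hypothetical @file{image.fits} by 0.2:
+
+@example
+$ astwarp --shear=0.2,0 image.fits
+@end example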
 
 @item -t FLT[,FLT]
 @itemx --translate=FLT[,FLT]
-Translate (move the center of coordinates) the input image by the given
-value(s): @mymath{c} and @mymath{f} in @ref{Warping basics}. If only one
-value is given, then both image axes will be translated by the given
-value. When two values are given (separated by a comma), the first will be
-used to translate the first axis and the second will be used for the second
-axis. If you only need to translate along one axis, use @option{0} for the
-axis that must be untouched. The value(s) can also be written (on the
-command-line or in configuration files) as a fraction.
+Translate (move the center of coordinates) the input image by the given 
value(s): @mymath{c} and @mymath{f} in @ref{Warping basics}.
+If only one value is given, then both image axes will be translated by the 
given value.
+When two values are given (separated by a comma), the first will be used to 
translate the first axis and the second will be used for the second axis.
+If you only need to translate along one axis, use @option{0} for the axis that 
must be untouched.
+The value(s) can also be written (on the command-line or in configuration 
files) as a fraction.
 
 @item -p FLT[,FLT]
 @itemx --project=FLT[,FLT]
-Apply a projection to the input image by the given values(s): @mymath{g}
-and @mymath{h} in @ref{Warping basics}. If only one value is given, then
-projection will apply to both axes with the given value. When two values
-are given (separated by a comma), the first will be used to project the
-first axis and the second will be used for the second axis. If you only
-need to project along one axis, use @option{0} for the axis that must be
-untouched. The value(s) can also be written (on the command-line or in
-configuration files) as a fraction.
+Apply a projection to the input image by the given values(s): @mymath{g} and 
@mymath{h} in @ref{Warping basics}.
+If only one value is given, then projection will apply to both axes with the 
given value.
+When two values are given (separated by a comma), the first will be used to 
project the first axis and the second will be used for the second axis.
+If you only need to project along one axis, use @option{0} for the axis that 
must be untouched.
+The value(s) can also be written (on the command-line or in configuration 
files) as a fraction.
 
 @item -m STR
 @itemx --matrix=STR
-The warp/transformation matrix. All the elements in this matrix must be
-separated by comas(@key{,}) characters and as described above, you can also
-use fractions (a forward-slash between two numbers). The transformation
-matrix can be either a 2 by 2 (4 numbers), or a 3 by 3 (9 numbers)
-array. In the former case (if a 2 by 2 matrix is given), then it is put
-into a 3 by 3 matrix (see @ref{Warping basics}).
+The warp/transformation matrix.
+All the elements in this matrix must be separated by comma (@key{,}) characters and, as described above, you can also use fractions (a forward-slash between two numbers).
+The transformation matrix can be either a 2 by 2 (4 numbers), or a 3 by 3 (9 
numbers) array.
+In the former case (if a 2 by 2 matrix is given), then it is put into a 3 by 3 
matrix (see @ref{Warping basics}).
 
 @cindex NaN
-The determinant of the matrix has to be non-zero and it must not
-contain any non-number values (for example infinities or NaNs). The
-elements of the matrix have to be written row by row. So for the
-general Homography matrix of @ref{Warping basics}, it should be called
-with @command{--matrix=a,b,c,d,e,f,g,h,1}.
-
-The raw matrix takes precedence over all the modular warping options listed
-above, so if it is called with any number of modular warps, the latter are
-ignored.
+The determinant of the matrix has to be non-zero and it must not contain any 
non-number values (for example infinities or NaNs).
+The elements of the matrix have to be written row by row.
+So for the general Homography matrix of @ref{Warping basics}, it should be 
called with @command{--matrix=a,b,c,d,e,f,g,h,1}.
+
+The raw matrix takes precedence over all the modular warping options listed 
above, so if it is called with any number of modular warps, the latter are 
ignored.
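+For example, a 2 by 2 matrix for a rotation by 20 degrees (using @mymath{\cos20^\circ\approx0.94} and @mymath{\sin20^\circ\approx0.34}, written row by row; the file name is hypothetical):
+
+@example
+$ astwarp --matrix=0.94,-0.34,0.34,0.94 image.fits
+@end example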
 
 @item -c
 @itemx --centeroncorner
-Put the center of coordinates on the corner of the first (bottom-left when
-viewed in SAO ds9) pixel. This option is applied after the final warping
-matrix has been finalized: either through modular warpings or the raw
-matrix. See the explanation above for coordinates in the FITS standard to
-better understand this option and when it should be used.
+Put the center of coordinates on the corner of the first (bottom-left when 
viewed in SAO ds9) pixel.
+This option is applied after the final warping matrix has been finalized: 
either through modular warpings or the raw matrix.
+See the explanation above for coordinates in the FITS standard to better 
understand this option and when it should be used.
 
 @item --hstartwcs=INT
-Specify the first header keyword number (line) that should be used to read
-the WCS information, see the full explanation in @ref{Invoking astcrop}.
+Specify the first header keyword number (line) that should be used to read the 
WCS information, see the full explanation in @ref{Invoking astcrop}.
 
 @item --hendwcs=INT
-Specify the last header keyword number (line) that should be used to read
-the WCS information, see the full explanation in @ref{Invoking astcrop}.
+Specify the last header keyword number (line) that should be used to read the 
WCS information, see the full explanation in @ref{Invoking astcrop}.
 
 @item -k
 @itemx --keepwcs
 @cindex WCSLIB
 @cindex World Coordinate System
-Do not correct the WCS information of the input image and save it untouched
-to the output image. By default the WCS (World Coordinate System)
-information of the input image is going to be corrected in the output image
-so the objects in the image are at the same WCS coordinates. But in some
-cases it might be useful to keep it unchanged (for example to correct
-alignments).
+Do not correct the WCS information of the input image and save it untouched to 
the output image.
+By default the WCS (World Coordinate System) information of the input image is 
going to be corrected in the output image so the objects in the image are at 
the same WCS coordinates.
+But in some cases it might be useful to keep it unchanged (for example to 
correct alignments).
 
 @item -C FLT
 @itemx --coveredfrac=FLT
-Depending on the warp, the output pixels that cover pixels on the edge of
-the input image, or blank pixels in the input image, are not going to be
-fully covered by input data. With this option, you can specify the
-acceptable covered fraction of such pixels (any value between 0 and 1). If
-you only want output pixels that are fully covered by the input image area
-(and are not blank), then you can set
-@option{--coveredfrac=1}. Alternatively, a value of @code{0} will keep
-output pixels that are even infinitesimally covered by the input(so the sum
-of the pixels in the input and output images will be the same).
+Depending on the warp, the output pixels that cover pixels on the edge of the 
input image, or blank pixels in the input image, are not going to be fully 
covered by input data.
+With this option, you can specify the acceptable covered fraction of such 
pixels (any value between 0 and 1).
+If you only want output pixels that are fully covered by the input image area 
(and are not blank), then you can set @option{--coveredfrac=1}.
+Alternatively, a value of @code{0} will keep output pixels that are even infinitesimally covered by the input (so the sum of the pixels in the input and output images will be the same).
 
 @end table
 
@@ -15917,12 +11772,8 @@ of the pixels in the input and output images will be 
the same).
 @node Data analysis, Modeling and fittings, Data manipulation, Top
 @chapter Data analysis
 
-Astronomical datasets (images or tables) contain very valuable information,
-the tools in this section can help in analyzing, extracting, and
-quantifying that information. For example getting general or specific
-statistics of the dataset (with @ref{Statistics}), detecting signal within
-a noisy dataset (with @ref{NoiseChisel}), or creating a catalog from an
-input dataset (with @ref{MakeCatalog}).
+Astronomical datasets (images or tables) contain very valuable information; the tools in this section can help in analyzing, extracting, and quantifying that information.
+For example, getting general or specific statistics of the dataset (with @ref{Statistics}), detecting signal within a noisy dataset (with @ref{NoiseChisel}), or creating a catalog from an input dataset (with @ref{MakeCatalog}).
 
 @menu
 * Statistics::                  Calculate dataset statistics.
@@ -15935,24 +11786,15 @@ input dataset (with @ref{MakeCatalog}).
 @node Statistics, NoiseChisel, Data analysis, Data analysis
 @section Statistics
 
-The distribution of values in a dataset can provide valuable information
-about it. For example, in an image, if it is a positively skewed
-distribution, we can see that there is significant data in the image. If
-the distribution is roughly symmetric, we can tell that there is no
-significant data in the image. In a table, when we need to select a sample
-of objects, it is important to first get a general view of the whole
-sample.
-
-On the other hand, you might need to know certain statistical parameters of
-the dataset. For example, if we have run a detection algorithm on an image,
-and we want to see how accurate it was, one method is to calculate the
-average of the undetected pixels and see how reasonable it is (if detection
-is done correctly, the average of undetected pixels should be approximately
-equal to the background value, see @ref{Sky value}). In a table, you might
-have calculated the magnitudes of a certain class of objects and want to get
-some general characteristics of the distribution immediately on the
-command-line (very fast!), to possibly change some parameters. The
-Statistics program is designed for such situations.
+The distribution of values in a dataset can provide valuable information about 
it.
+For example, in an image, if it is a positively skewed distribution, we can 
see that there is significant data in the image.
+If the distribution is roughly symmetric, we can tell that there is no 
significant data in the image.
+In a table, when we need to select a sample of objects, it is important to 
first get a general view of the whole sample.
+
+On the other hand, you might need to know certain statistical parameters of 
the dataset.
+For example, if we have run a detection algorithm on an image, and we want to 
see how accurate it was, one method is to calculate the average of the 
undetected pixels and see how reasonable it is (if detection is done correctly, 
the average of undetected pixels should be approximately equal to the 
background value, see @ref{Sky value}).
+In a table, you might have calculated the magnitudes of a certain class of 
objects and want to get some general characteristics of the distribution 
immediately on the command-line (very fast!), to possibly change some 
parameters.
+The Statistics program is designed for such situations.
 
 @menu
 * Histogram and Cumulative Frequency Plot::  Basic definitions.
@@ -15966,112 +11808,67 @@ Statistics program is designed for such situations.
 @subsection Histogram and Cumulative Frequency Plot
 
 @cindex Histogram
-Histograms and the cumulative frequency plots are both used to visually
-study the distribution of a dataset. A histogram shows the number of data
-points which lie within pre-defined intervals (bins). So on the horizontal
-axis we have the bin centers and on the vertical, the number of points that
-are in that bin. You can use it to get a general view of the distribution:
-which values have been repeated the most? how close/far are the most
-significant bins?  Are there more values in the larger part of the range of
-the dataset, or in the lower part?  Similarly, many very important
-properties about the dataset can be deduced from a visual inspection of the
-histogram. In the Statistics program, the histogram can be either output to
-a table to plot with your favorite plotting program@footnote{We recommend
-@url{http://pgfplots.sourceforge.net/,PGFPlots} which generates your plots
-directly within @TeX{} (the same tool that generates your document).}, or
-it can be shown with ASCII characters on the command-line, which is very
-crude, but good enough for a fast and on-the-go analysis, see the example
-in @ref{Invoking aststatistics}.
+Histograms and cumulative frequency plots are both used to visually study the distribution of a dataset.
+A histogram shows the number of data points which lie within pre-defined 
intervals (bins).
+So on the horizontal axis we have the bin centers and on the vertical, the 
number of points that are in that bin.
+You can use it to get a general view of the distribution: which values have been repeated the most? How close/far are the most significant bins? Are there more values in the larger part of the range of the dataset, or in the lower part?
+Similarly, many very important properties about the dataset can be deduced from a visual inspection of the histogram.
+In the Statistics program, the histogram can be either output to a table to 
plot with your favorite plotting program@footnote{
+We recommend @url{http://pgfplots.sourceforge.net/,PGFPlots} which generates 
your plots directly within @TeX{} (the same tool that generates your 
document).},
+or it can be shown with ASCII characters on the command-line, which is very 
crude, but good enough for a fast and on-the-go analysis, see the example in 
@ref{Invoking aststatistics}.
 
 @cindex Intervals, histogram
 @cindex Bin width, histogram
 @cindex Normalizing histogram
 @cindex Probability density function
-The width of the bins is only necessary parameter for a histogram. In the
-limiting case that the bin-widths tend to zero (while assuming the number
-of points in the dataset tend to infinity), then the histogram will tend to
-the @url{https://en.wikipedia.org/wiki/Probability_density_function,
-probability density function} of the distribution. When the absolute number
-of points in each bin is not relevant to the study (only the shape of the
-histogram is important), you can @emph{normalize} a histogram so like the
-probability density function, the sum of all its bins will be one.
+The width of the bins is the only necessary parameter for a histogram.
+In the limiting case that the bin-widths tend to zero (while assuming the 
number of points in the dataset tend to infinity), then the histogram will tend 
to the @url{https://en.wikipedia.org/wiki/Probability_density_function, 
probability density function} of the distribution.
+When the absolute number of points in each bin is not relevant to the study (only the shape of the histogram is important), you can @emph{normalize} a histogram so that, like the probability density function, the sum of all its bins will be one.
 
 @cindex Cumulative Frequency Plot
-In the cumulative frequency plot of a distribution, the horizontal axis is
-the sorted data values and the y axis is the index of each data in the
-sorted distribution. Unlike a histogram, a cumulative frequency plot does
-not involve intervals or bins. This makes it less prone to any sort of bias
-or error that a given bin-width would have on the analysis. When a larger
-number of the data points have roughly the same value, then the cumulative
-frequency plot will become steep in that vicinity. This occurs because on
-the horizontal axis, there is little change while on the vertical axis, the
-indexes constantly increase. Normalizing a cumulative frequency plot means
-to divide each index (y axis) by the total number of data points (or the
-last value).
-
-Unlike the histogram which has a limited number of bins, ideally the
-cumulative frequency plot should have one point for every data
-element. Even in small datasets (for example a @mymath{200\times200} image)
-this will result in an unreasonably large number of points to plot (40000)!
-As a result, for practical reasons, it is common to only store its value on
-a certain number of points (intervals) in the input range rather than the
-whole dataset, so you should determine the number of bins you want when
-asking for a cumulative frequency plot. In Gnuastro (and thus the
-Statistics program), the number reported for each bin is the total number
-of data points until the larger interval value for that bin. You can see an
-example histogram and cumulative frequency plot of a single dataset under
-the @option{--asciihist} and @option{--asciicfp} options of @ref{Invoking
-aststatistics}.
-
-So as a summary, both the histogram and cumulative frequency plot in
-Statistics will work with bins. Within each bin/interval, the lower value
-is considered to be within then bin (it is inclusive), but its larger value
-is not (it is exclusive). Formally, an interval/bin between a and b is
-represented by [a, b). When the over-all range of the dataset is specified
-(with the @option{--greaterequal}, @option{--lessthan}, or
-@option{--qrange} options), the acceptable values of the dataset are also
-defined with a similar inclusive-exclusive manner. But when the range is
-determined from the actual dataset (none of these options is called), the
-last element in the dataset is included in the last bin's count.
+In the cumulative frequency plot of a distribution, the horizontal axis is the sorted data values and the y axis is the index of each data element in the sorted distribution.
+Unlike a histogram, a cumulative frequency plot does not involve intervals or 
bins.
+This makes it less prone to any sort of bias or error that a given bin-width 
would have on the analysis.
+When a larger number of the data points have roughly the same value, then the 
cumulative frequency plot will become steep in that vicinity.
+This occurs because on the horizontal axis, there is little change while on 
the vertical axis, the indexes constantly increase.
+Normalizing a cumulative frequency plot means to divide each index (y axis) by 
the total number of data points (or the last value).
+
+Unlike the histogram which has a limited number of bins, ideally the 
cumulative frequency plot should have one point for every data element.
+Even in small datasets (for example a @mymath{200\times200} image) this will result in an unreasonably large number of points to plot (40000)!
+As a result, for practical reasons, it is common to only store its value on a certain number of points (intervals) in the input range rather than the whole dataset, so you should determine the number of bins you want when asking for a cumulative frequency plot.
+In Gnuastro (and thus the Statistics program), the number reported for each bin is the total number of data points up to the larger interval value of that bin.
+You can see an example histogram and cumulative frequency plot of a single 
dataset under the @option{--asciihist} and @option{--asciicfp} options of 
@ref{Invoking aststatistics}.
+
+So as a summary, both the histogram and cumulative frequency plot in 
Statistics will work with bins.
+Within each bin/interval, the lower value is considered to be within the bin (it is inclusive), but its larger value is not (it is exclusive).
+Formally, an interval/bin between @mymath{a} and @mymath{b} is represented by @mymath{[a, b)}.
+When the over-all range of the dataset is specified (with the 
@option{--greaterequal}, @option{--lessthan}, or @option{--qrange} options), 
the acceptable values of the dataset are also defined with a similar 
inclusive-exclusive manner.
+But when the range is determined from the actual dataset (none of these 
options is called), the last element in the dataset is included in the last 
bin's count.
 
 
 @node  Sigma clipping, Sky value, Histogram and Cumulative Frequency Plot, 
Statistics
 @subsection Sigma clipping
 
-Let's assume that you have pure noise (centered on zero) with a clear
-@url{https://en.wikipedia.org/wiki/Normal_distribution,Gaussian
-distribution}, or see @ref{Photon counting noise}. Now let's assume you add
-very bright objects (signal) on the image which have a very sharp
-boundary. By a sharp boundary, we mean that there is a clear cutoff (from
-the noise) at the pixels the objects finish. In other words, at their
-boundaries, the objects do not fade away into the noise. In such a case,
-when you plot the histogram (see @ref{Histogram and Cumulative Frequency
-Plot}) of the distribution, the pixels relating to those objects will be
-clearly separate from pixels that belong to parts of the image that did not
-have any signal (were just noise). In the cumulative frequency plot, after
-a steady rise (due to the noise), you would observe a long flat region were
-for a certain range of data (horizontal axis), there is no increase in the
-index (vertical axis).
+Let's assume that you have pure noise (centered on zero) with a clear 
@url{https://en.wikipedia.org/wiki/Normal_distribution,Gaussian distribution}, 
or see @ref{Photon counting noise}.
+Now let's assume you add very bright objects (signal) on the image which have 
a very sharp boundary.
+By a sharp boundary, we mean that there is a clear cutoff (from the noise) at the pixels where the objects finish.
+In other words, at their boundaries, the objects do not fade away into the 
noise.
+In such a case, when you plot the histogram (see @ref{Histogram and Cumulative 
Frequency Plot}) of the distribution, the pixels relating to those objects will 
be clearly separate from pixels that belong to parts of the image that did not 
have any signal (were just noise).
+In the cumulative frequency plot, after a steady rise (due to the noise), you would observe a long flat region where, for a certain range of data (horizontal axis), there is no increase in the index (vertical axis).
 
 @cindex Blurring
 @cindex Cosmic rays
 @cindex Aperture blurring
 @cindex Atmosphere blurring
-Outliers like the example above can significantly bias the measurement of
-noise statistics. @mymath{\sigma}-clipping is defined as a way to avoid the
-effect of such outliers. In astronomical applications, cosmic rays (when
-they collide at a near normal incidence angle) are a very good example of
-such outliers. The tracks they leave behind in the image are perfectly
-immune to the blurring caused by the atmosphere and the aperture. They are
-also very energetic and so their borders are usually clearly separated from
-the surrounding noise. So @mymath{\sigma}-clipping is very useful in
-removing their effect on the data. See Figure 15 in Akhlaghi and Ichikawa,
-@url{https://arxiv.org/abs/1505.01664,2015}.
-
-@mymath{\sigma}-clipping is defined as the very simple iteration below. In
-each iteration, the range of input data might decrease and so when the
-outliers have the conditions above, the outliers will be removed through
-this iteration. The exit criteria will be discussed below.
+Outliers like the example above can significantly bias the measurement of 
noise statistics.
+@mymath{\sigma}-clipping is defined as a way to avoid the effect of such 
outliers.
+In astronomical applications, cosmic rays (when they collide at a near normal 
incidence angle) are a very good example of such outliers.
+The tracks they leave behind in the image are perfectly immune to the blurring 
caused by the atmosphere and the aperture.
+They are also very energetic and so their borders are usually clearly 
separated from the surrounding noise.
+So @mymath{\sigma}-clipping is very useful in removing their effect on the 
data.
+See Figure 15 in Akhlaghi and Ichikawa, 
@url{https://arxiv.org/abs/1505.01664,2015}.
+
+@mymath{\sigma}-clipping is defined as the very simple iteration below.
+In each iteration, the range of input data might decrease and so when the 
outliers have the conditions above, the outliers will be removed through this 
iteration.
+The exit criteria will be discussed below.
 
 @enumerate
 @item
@@ -16085,49 +11882,36 @@ Go back to step 1, unless the selected exit criteria 
is reached.
 @end enumerate
 
 @noindent
-The reason the median is used as a reference and not the mean is that the
-mean is too significantly affected by the presence of outliers, while the
-median is less affected, see @ref{Quantifying signal in a tile}. As you can
-tell from this algorithm, besides the condition above (that the signal have
-clear high signal to noise boundaries) @mymath{\sigma}-clipping is only
-useful when the signal does not cover more than half of the full data
-set. If they do, then the median will lie over the outliers and
-@mymath{\sigma}-clipping might remove the pixels with no signal.
+The reason the median is used as a reference and not the mean is that the mean 
is too significantly affected by the presence of outliers, while the median is 
less affected, see @ref{Quantifying signal in a tile}.
+As you can tell from this algorithm, besides the condition above (that the signal has clear, high signal-to-noise boundaries), @mymath{\sigma}-clipping is only useful when the signal does not cover more than half of the full data set.
+If it does, then the median will lie over the outliers and @mymath{\sigma}-clipping might remove the pixels with no signal.
 
 There are commonly two exit criteria to stop the @mymath{\sigma}-clipping
 iteration:
 
 @itemize
 @item
-When a certain number of iterations has taken place (second value to the
-@option{--sclipparams} option is larger than 1).
+When a certain number of iterations has taken place (second value to the 
@option{--sclipparams} option is larger than 1).
 @item
-When the new measured standard deviation is within a certain tolerance
-level of the old one (second value to the @option{--sclipparams} option is
-less than 1). The tolerance level is defined by:
+When the new measured standard deviation is within a certain tolerance level 
of the old one (second value to the @option{--sclipparams} option is less than 
1).
+The tolerance level is defined by:
 
 @dispmath{\sigma_{old}-\sigma_{new} \over \sigma_{new}}
 
-The standard deviation is used because it is heavily influenced by the
-presence of outliers. Therefore the fact that it stops changing between two
-iterations is a sign that we have successfully removed outliers. Note that
-in each clipping, the dispersion in the distribution is either less or
-equal. So @mymath{\sigma_{old}\geq\sigma_{new}}.
+The standard deviation is used because it is heavily influenced by the 
presence of outliers.
+Therefore the fact that it stops changing between two iterations is a sign 
that we have successfully removed outliers.
+Note that in each clipping, the dispersion in the distribution either decreases or stays the same.
+So @mymath{\sigma_{old}\geq\sigma_{new}}.
+An example invocation is sketched after this list.
 @end itemize
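+As an example, the sketch below is one way these parameters could be given to the Statistics program; the binding of these options to @command{aststatistics} and the file name are assumptions here, so check @ref{Invoking aststatistics} for the exact interface.
+It would clip at @mymath{3\sigma} and stop when the relative change in the standard deviation is below 0.2:
+
+@example
+$ aststatistics image.fits --sigmaclip --sclipparams=3,0.2
+@end example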
 
 @cartouche
 @noindent
-When working on astronomical images, objects like galaxies and stars are
-blurred by the atmosphere and the telescope aperture, therefore their
-signal sinks into the noise very gradually. Galaxies in particular do not
-appear to have a clear high signal to noise cutoff at all. Therefore
-@mymath{\sigma}-clipping will not be useful in removing their effect on the
-data.
-
-To gauge if @mymath{\sigma}-clipping will be useful for your dataset, look
-at the histogram (see @ref{Histogram and Cumulative Frequency Plot}). The
-ASCII histogram that is printed on the command-line with
-@option{--asciihist} is good enough in most cases.
+When working on astronomical images, objects like galaxies and stars are 
blurred by the atmosphere and the telescope aperture, therefore their signal 
sinks into the noise very gradually.
+Galaxies in particular do not appear to have a clear high signal to noise 
cutoff at all.
+Therefore @mymath{\sigma}-clipping will not be useful in removing their effect 
on the data.
+
+To gauge if @mymath{\sigma}-clipping will be useful for your dataset, look at 
the histogram (see @ref{Histogram and Cumulative Frequency Plot}).
+The ASCII histogram that is printed on the command-line with 
@option{--asciihist} is good enough in most cases.
 @end cartouche
 
 
@@ -16137,33 +11921,19 @@ ASCII histogram that is printed on the command-line 
with
 @subsection Sky value
 
 @cindex Sky
-One of the most important aspects of a dataset is its reference value: the
-value of the dataset where there is no signal. Without knowing, and thus
-removing the effect of, this value it is impossible to compare the derived
-results of many high-level analyses over the dataset with other datasets
-(in the attempt to associate our results with the ``real'' world).
-
-In astronomy, this reference value is known as the ``Sky'' value: the value
-that noise fluctuates around: where there is no signal from detectable
-objects or artifacts (for example galaxies, stars, planets or comets, star
-spikes or internal optical ghost). Depending on the dataset, the Sky value
-maybe a fixed value over the whole dataset, or it may vary based on
-location. For an example of the latter case, see Figure 11 in
-@url{https://arxiv.org/abs/1505.01664, Akhlaghi and Ichikawa (2015)}.
-
-Because of the significance of the Sky value in astronomical data analysis,
-we have devoted this subsection to it for a thorough review. We start with
-a thorough discussion on its definition (@ref{Sky value definition}). In
-the astronomical literature, researchers use a variety of methods to
-estimate the Sky value, so in @ref{Sky value misconceptions}) we review
-those and discuss their biases. From the definition of the Sky value, the
-most accurate way to estimate the Sky value is to run a detection algorithm
-(for example @ref{NoiseChisel}) over the dataset and use the undetected
-pixels. However, there is also a more crude method that maybe useful when
-good direct detection is not initially possible (for example due to too
-many cosmic rays in a shallow image). A more crude (but simpler method)
-that is usable in such situations is discussed in @ref{Quantifying signal
-in a tile}.
+One of the most important aspects of a dataset is its reference value: the 
value of the dataset where there is no signal.
+Without knowing, and thus removing the effect of, this value it is impossible 
to compare the derived results of many high-level analyses over the dataset 
with other datasets (in the attempt to associate our results with the ``real'' 
world).
+
+In astronomy, this reference value is known as the ``Sky'' value: the value that noise fluctuates around: where there is no signal from detectable objects or artifacts (for example galaxies, stars, planets or comets, star spikes or internal optical ghosts).
+Depending on the dataset, the Sky value may be a fixed value over the whole dataset, or it may vary based on location.
+For an example of the latter case, see Figure 11 in 
@url{https://arxiv.org/abs/1505.01664, Akhlaghi and Ichikawa (2015)}.
+
+Because of the significance of the Sky value in astronomical data analysis, we 
have devoted this subsection to it for a thorough review.
+We start with a thorough discussion on its definition (@ref{Sky value 
definition}).
+In the astronomical literature, researchers use a variety of methods to estimate the Sky value, so in @ref{Sky value misconceptions} we review those and discuss their biases.
+From the definition of the Sky value, the most accurate way to estimate the Sky value is to run a detection algorithm (for example @ref{NoiseChisel}) over the dataset and use the undetected pixels.
+However, there is also a cruder method that may be useful when good direct detection is not initially possible (for example due to too many cosmic rays in a shallow image).
+This cruder (but simpler) method, usable in such situations, is discussed in @ref{Quantifying signal in a tile}.
 
 @menu
 * Sky value definition::        Definition of the Sky/reference value.
@@ -16175,17 +11945,10 @@ in a tile}.
 @subsubsection Sky value definition
 
 @cindex Sky value
-This analysis is taken from @url{https://arxiv.org/abs/1505.01664, Akhlaghi
-and Ichikawa (2015)}. Let's assume that all instrument defects -- bias,
-dark and flat -- have been corrected and the brightness (see @ref{Flux
-Brightness and magnitude}) of a detected object, @mymath{O}, is
-desired. The sources of flux on pixel@footnote{For this analysis the
-dimension of the data (image) is irrelevant. So if the data is an image
-(2D) with width of @mymath{w} pixels, then a pixel located on column
-@mymath{x} and row @mymath{y} (where all counting starts from zero and (0,
-0) is located on the bottom left corner of the image), would have an index:
-@mymath{i=x+y\times{}w}.} @mymath{i} of the image can be written as
-follows:
+This analysis is taken from @url{https://arxiv.org/abs/1505.01664, Akhlaghi 
and Ichikawa (2015)}.
+Let's assume that all instrument defects -- bias, dark and flat -- have been 
corrected and the brightness (see @ref{Flux Brightness and magnitude}) of a 
detected object, @mymath{O}, is desired.
+The sources of flux on pixel@footnote{For this analysis the dimension of the 
data (image) is irrelevant.
+So if the data is an image (2D) with width of @mymath{w} pixels, then a pixel 
located on column @mymath{x} and row @mymath{y} (where all counting starts from 
zero and (0, 0) is located on the bottom left corner of the image), would have 
an index: @mymath{i=x+y\times{}w}.} @mymath{i} of the image can be written as 
follows:
 
 @itemize
 @item
@@ -16193,15 +11956,13 @@ Contribution from the target object (@mymath{O_i}).
 @item
 Contribution from other detected objects (@mymath{D_i}).
 @item
-Undetected objects or the fainter undetected regions of bright
-objects (@mymath{U_i}).
+Undetected objects or the fainter undetected regions of bright objects 
(@mymath{U_i}).
 @item
 @cindex Cosmic rays
 A cosmic ray (@mymath{C_i}).
 @item
 @cindex Background flux
-The background flux, which is defined to be the count if none of the
-others exists on that pixel (@mymath{B_i}).
+The background flux, which is defined to be the count if none of the others 
exists on that pixel (@mymath{B_i}).
 @end itemize
 @noindent
 The total flux in this pixel (@mymath{T_i}) can thus be written as:
@@ -16210,51 +11971,34 @@ The total flux in this pixel (@mymath{T_i}) can thus 
be written as:
 
 @cindex Cosmic ray removal
 @noindent
-By definition, @mymath{D_i} is detected and it can be assumed that it is
-correctly estimated (deblended) and subtracted, we can thus set
-@mymath{D_i=0}. There are also methods to detect and remove cosmic rays,
-for example the method described in van Dokkum (2001)@footnote{van Dokkum,
-P. G. (2001). Publications of the Astronomical Society of the Pacific.
-113, 1420.}, or by comparing multiple exposures. This allows us to set
-@mymath{C_i=0}. Note that in practice, @mymath{D_i} and @mymath{U_i} are
-correlated, because they both directly depend on the detection algorithm
-and its input parameters. Also note that no detection or cosmic ray removal
-algorithm is perfect. With these limitations in mind, the observed Sky
-value for this pixel (@mymath{S_i}) can be defined as
+By definition, @mymath{D_i} is detected and it can be assumed that it is correctly estimated (deblended) and subtracted; we can thus set @mymath{D_i=0}.
+There are also methods to detect and remove cosmic rays, for example the 
method described in van Dokkum (2001)@footnote{van Dokkum, P. G. (2001).
+Publications of the Astronomical Society of the Pacific. 113, 1420.}, or by 
comparing multiple exposures.
+This allows us to set @mymath{C_i=0}.
+Note that in practice, @mymath{D_i} and @mymath{U_i} are correlated, because 
they both directly depend on the detection algorithm and its input parameters.
+Also note that no detection or cosmic ray removal algorithm is perfect.
+With these limitations in mind, the observed Sky value for this pixel 
(@mymath{S_i}) can be defined as
 
 @cindex Sky value
 @dispmath{S_i\equiv{}B_i+U_i.}
 
 @noindent
-Therefore, as the detection process (algorithm and input parameters)
-becomes more accurate, or @mymath{U_i\to0}, the Sky value will tend to the
-background value or @mymath{S_i\to B_i}. Hence, we see that while
-@mymath{B_i} is an inherent property of the data (pixel in an image),
-@mymath{S_i} depends on the detection process. Over a group of pixels, for
-example in an image or part of an image, this equation translates to the
-average of undetected pixels (Sky@mymath{=\sum{S_i}}).  With this
-definition of Sky, the object flux in the data can be calculated, per
-pixel, with
+Therefore, as the detection process (algorithm and input parameters) becomes 
more accurate, or @mymath{U_i\to0}, the Sky value will tend to the background 
value or @mymath{S_i\to B_i}.
+Hence, we see that while @mymath{B_i} is an inherent property of the data 
(pixel in an image), @mymath{S_i} depends on the detection process.
+Over a group of pixels, for example in an image or part of an image, this equation translates to the average of undetected pixels (Sky@mymath{=\sum{S_i}/N} for @mymath{N} undetected pixels).
+With this definition of Sky, the object flux in the data can be calculated, 
per pixel, with
 
 @dispmath{ T_{i}=S_{i}+O_{i} \quad\rightarrow\quad
            O_{i}=T_{i}-S_{i}.}
 
 @cindex photo-electrons
-In the fainter outskirts of an object, a very small fraction of the
-photo-electrons in a pixel actually belongs to objects, the rest is caused
-by random factors (noise), see Figure 1b in
-@url{https://arxiv.org/abs/1505.01664, Akhlaghi and Ichikawa
-(2015)}. Therefore even a small over estimation of the Sky value will
-result in the loss of a very large portion of most galaxies. Besides the
-lost area/brightness, this will also cause an over-estimation of the Sky
-value and thus even more under-estimation of the object's brightness. It is
-thus very important to detect the diffuse flux of a target, even if they
-are not your primary target.
-
-In summary, the more accurately the Sky is measured, the more accurately
-the brightness (sum of pixel values) of the target object can be measured
-(photometry). Any under/over-estimation in the Sky will directly translate
-to an over/under-estimation of the measured object's brightness.
+In the fainter outskirts of an object, a very small fraction of the photo-electrons in a pixel actually belongs to the object; the rest is caused by random factors (noise), see Figure 1b in @url{https://arxiv.org/abs/1505.01664, Akhlaghi and Ichikawa (2015)}.
+Therefore even a small over-estimation of the Sky value will result in the loss of a very large portion of most galaxies.
+Besides the lost area/brightness, this will also cause an over-estimation of the Sky value and thus even more under-estimation of the object's brightness.
+It is thus very important to detect the diffuse flux of a target, even if it is not your primary target.
+
+In summary, the more accurately the Sky is measured, the more accurately the 
brightness (sum of pixel values) of the target object can be measured 
(photometry).
+Any under/over-estimation in the Sky will directly translate to an 
over/under-estimation of the measured object's brightness.
 
 @cartouche
 @noindent
@@ -16268,24 +12012,15 @@ objects (@mymath{D_i} and @mymath{C_i}) have been 
removed from the data.
 @node Sky value misconceptions, Quantifying signal in a tile, Sky value 
definition, Sky value
 @subsubsection Sky value misconceptions
 
-As defined in @ref{Sky value}, the sky value is only accurately defined
-when the detection algorithm is not significantly reliant on the sky
-value. In particular its detection threshold. However, most signal-based
-detection tools@footnote{According to Akhlaghi and Ichikawa (2015),
-signal-based detection is a detection process that relies heavily on
-assumptions about the to-be-detected objects. This method was the most
-heavily used technique prior to the introduction of NoiseChisel in that
-paper.} use the sky value as a reference to define the detection
-threshold. These older techniques therefore had to rely on approximations
-based on other assumptions about the data. A review of those other
-techniques can be seen in Appendix A of
-@url{https://arxiv.org/abs/1505.01664, Akhlaghi and Ichikawa (2015)}.
-
-These methods were extensively used in astronomical data analysis for
-several decades, therefore they have given rise to a lot of misconceptions,
-ambiguities and disagreements about the sky value and how to measure it. As
-a summary, the major methods used until now were an approximation of the
-mode of the image pixel distribution and @mymath{\sigma}-clipping.
+As defined in @ref{Sky value}, the sky value is only accurately defined when the detection algorithm is not significantly reliant on the sky value; in particular, on its detection threshold.
+However, most signal-based detection tools@footnote{According to Akhlaghi and 
Ichikawa (2015), signal-based detection is a detection process that relies 
heavily on assumptions about the to-be-detected objects.
+This method was the most heavily used technique prior to the introduction of 
NoiseChisel in that paper.} use the sky value as a reference to define the 
detection threshold.
+These older techniques therefore had to rely on approximations based on other 
assumptions about the data.
+A review of those other techniques can be seen in Appendix A of 
@url{https://arxiv.org/abs/1505.01664, Akhlaghi and Ichikawa (2015)}.
+
+These methods were extensively used in astronomical data analysis for several decades; therefore they have given rise to a lot of misconceptions, ambiguities and disagreements about the sky value and how to measure it.
+As a summary, the major methods used until now were an approximation of the 
mode of the image pixel distribution and @mymath{\sigma}-clipping.
 
 @itemize
 @cindex Histogram
@@ -16293,30 +12028,22 @@ mode of the image pixel distribution and 
@mymath{\sigma}-clipping.
 @cindex Mode of a distribution
 @cindex Probability density function
 @item
-To find the mode of a distribution those methods would either have to
-assume (or find) a certain probability density function (PDF) or use the
-histogram. But astronomical datasets can have any distribution, making it
-almost impossible to define a generic function. Also, histogram-based
-results are very inaccurate (there is a large dispersion) and it depends on
-the histogram bin-widths. Generally, the mode of a distribution also shifts
-as signal is added. Therefore, even if it is accurately measured, the mode
-is a biased measure for the Sky value.
+To find the mode of a distribution those methods would either have to assume 
(or find) a certain probability density function (PDF) or use the histogram.
+But astronomical datasets can have any distribution, making it almost 
impossible to define a generic function.
+Also, histogram-based results are very inaccurate (there is a large dispersion) and depend on the histogram bin-widths.
+Generally, the mode of a distribution also shifts as signal is added.
+Therefore, even if it is accurately measured, the mode is a biased measure for 
the Sky value.
 
 @cindex Sigma-clipping
 @item
-Another approach was to iteratively clip the brightest pixels in the image
-(which is known as @mymath{\sigma}-clipping). See @ref{Sigma clipping} for
-a complete explanation. @mymath{\sigma}-clipping is useful when there are
-clear outliers (an object with a sharp edge in an image for
-example). However, real astronomical objects have diffuse and faint wings
-that penetrate deeply into the noise, see Figure 1 in
-@url{https://arxiv.org/abs/1505.01664, Akhlaghi and Ichikawa (2015)}.
+Another approach was to iteratively clip the brightest pixels in the image 
(which is known as @mymath{\sigma}-clipping).
+See @ref{Sigma clipping} for a complete explanation.
+@mymath{\sigma}-clipping is useful when there are clear outliers (an object 
with a sharp edge in an image for example).
+However, real astronomical objects have diffuse and faint wings that penetrate 
deeply into the noise, see Figure 1 in @url{https://arxiv.org/abs/1505.01664, 
Akhlaghi and Ichikawa (2015)}.
 @end itemize
 
-As discussed in @ref{Sky value}, the sky value can only be correctly
-defined as the average of undetected pixels. Therefore all such
-approaches that try to approximate the sky value prior to detection
-are ultimately poor approximations.
+As discussed in @ref{Sky value}, the sky value can only be correctly defined 
as the average of undetected pixels.
+Therefore all such approaches that try to approximate the sky value prior to 
detection are ultimately poor approximations.
 
 
 
@@ -16327,136 +12054,81 @@ are ultimately poor approximations.
 @cindex Noise
 @cindex Signal
 @cindex Gaussian distribution
-Put simply, noise can be characterized with a certain spread about the
-measured value. In the Gaussian distribution (most commonly used to model
-noise) the spread is defined by the standard deviation about the
-characteristic mean.
-
-Let's start by clarifying some definitions first: @emph{Data} is defined as
-the combination of signal and noise (so a noisy image is one
-@emph{data}set). @emph{Signal} is defined as the mean of the noise on each
-element. We'll also assume that the @emph{background} (see @ref{Sky value
-definition}) is subtracted and is zero.
-
-When a data set doesn't have any signal (only noise), the mean, median and
-mode of the distribution are equal within statistical errors and
-approximately equal to the background value.  Signal always has a positive
-value and will never become negative, see Figure 1 in
-@url{https://arxiv.org/abs/1505.01664, Akhlaghi and Ichikawa
-(2015)}. Therefore, as more signal is added, the mean, median and mode of
-the dataset shift to the positive. The mean's shift is the largest. The
-median shifts less, since it is defined based on an ordered distribution
-and so is not affected by a small number of outliers. The distribution's
-mode shifts the least to the positive.
+Put simply, noise can be characterized with a certain spread about the 
measured value.
+In the Gaussian distribution (most commonly used to model noise) the spread is 
defined by the standard deviation about the characteristic mean.
+
+Let's start by clarifying some definitions: @emph{Data} is defined as the combination of signal and noise (so a noisy image is one @emph{data}set).
+@emph{Signal} is defined as the mean of the noise on each element.
+We'll also assume that the @emph{background} (see @ref{Sky value definition}) 
is subtracted and is zero.
+
+When a data set doesn't have any signal (only noise), the mean, median and 
mode of the distribution are equal within statistical errors and approximately 
equal to the background value.
+Signal always has a positive value and will never become negative, see Figure 
1 in @url{https://arxiv.org/abs/1505.01664, Akhlaghi and Ichikawa (2015)}.
+Therefore, as more signal is added, the mean, median and mode of the dataset 
shift to the positive.
+The mean's shift is the largest.
+The median shifts less, since it is defined based on an ordered distribution 
and so is not affected by a small number of outliers.
+The distribution's mode shifts the least to the positive.
 
 @cindex Mode
 @cindex Median
-Inverting the argument above gives us a robust method to quantify the
-significance of signal in a dataset. Namely, when the mean and median of a
-distribution are approximately equal, or the mean's quantile is around 0.5,
-we can argue that there is no significant signal.
-
-To allow for gradients (which are commonly present in ground-based images),
-we can consider the image to be made of a grid of tiles (see
-@ref{Tessellation}@footnote{The options to customize the tessellation are
-discussed in @ref{Processing options}.}). Hence, from the difference of the
-mean and median on each tile, we can estimate the significance of signal in
-it. The median of a distribution is defined to be the value of the
-distribution's middle point after sorting (or 0.5 quantile). Thus, to
-estimate the presence of signal, we'll compare with the quantile of the
-mean with 0.5. If the absolute difference in a tile is larger than the
-value given to the @option{--meanmedqdiff} option, that tile will be
-ignored. You can read this option as ``mean-median-quantile-difference''.
-
-The raw dataset's distribution is noisy, so using the argument above on the
-raw input will give a noisy result. To decrease the noise/error in
-estimating the mode, we will use convolution (see @ref{Convolution
-process}). Convolution decreases the range of the dataset and enhances its
-skewness, See Section 3.1.1 and Figure 4 in
-@url{https://arxiv.org/abs/1505.01664, Akhlaghi and Ichikawa (2015)}. This
-enhanced skewness can be interpreted as an increase in the Signal to noise
-ratio of the objects buried in the noise. Therefore, to obtain an even
-better measure of the presence of signal in a tile, the mean and median
-discussed above are measured on the convolved image.
-
-Through the difference of the mean and median we have actually `detected'
-data in the distribution. However this ``detection'' was only based on the
-total distribution of the data in each tile (a much lower resolution). This
-is the main limitation of this technique. The best approach is thus to do
-detection over the dataset, mask all the detected pixels and use the
-undetected regions to estimate the sky and its standard deviation (possibly
-over a tessellation). This is how NoiseChisel works: it uses the argument
-above to find tiles that are used to find its thresholds. Several
-higher-level steps are done on the thresholded pixels to define the
-higher-level detections (see @ref{NoiseChisel}).
+Inverting the argument above gives us a robust method to quantify the 
significance of signal in a dataset.
+Namely, when the mean and median of a distribution are approximately equal, or 
the mean's quantile is around 0.5, we can argue that there is no significant 
signal.
+
+To allow for gradients (which are commonly present in ground-based images), we 
can consider the image to be made of a grid of tiles (see 
@ref{Tessellation}@footnote{The options to customize the tessellation are 
discussed in @ref{Processing options}.}).
+Hence, from the difference of the mean and median on each tile, we can 
estimate the significance of signal in it.
+The median of a distribution is defined to be the value of the distribution's 
middle point after sorting (or 0.5 quantile).
+Thus, to estimate the presence of signal, we'll compare the quantile of the mean with 0.5.
+If the absolute difference in a tile is larger than the value given to the @option{--meanmedqdiff} option, that tile will be ignored.
+You can read this option as ``mean-median-quantile-difference''.
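+For example, a sketch with NoiseChisel (the program binding and the value are assumptions here; @option{--meanmedqdiff} is the option described above) that would ignore tiles where the mean's quantile is more than 0.01 away from 0.5:
+
+@example
+$ astnoisechisel image.fits --meanmedqdiff=0.01
+@end example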
+
+The raw dataset's distribution is noisy, so using the argument above on the 
raw input will give a noisy result.
+To decrease the noise/error in estimating the mode, we will use convolution 
(see @ref{Convolution process}).
+Convolution decreases the range of the dataset and enhances its skewness; see Section 3.1.1 and Figure 4 in @url{https://arxiv.org/abs/1505.01664, Akhlaghi and Ichikawa (2015)}.
+This enhanced skewness can be interpreted as an increase in the Signal to 
noise ratio of the objects buried in the noise.
+Therefore, to obtain an even better measure of the presence of signal in a 
tile, the mean and median discussed above are measured on the convolved image.
+
+Through the difference of the mean and median we have actually `detected' data 
in the distribution.
+However, this ``detection'' was only based on the total distribution of the data in each tile (a much lower resolution).
+This is the main limitation of this technique.
+The best approach is thus to do detection over the dataset, mask all the 
detected pixels and use the undetected regions to estimate the sky and its 
standard deviation (possibly over a tessellation).
+This is how NoiseChisel works: it uses the argument above to find tiles that 
are used to find its thresholds.
+Several higher-level steps are done on the thresholded pixels to define the 
higher-level detections (see @ref{NoiseChisel}).
 
 @cindex Cosmic rays
-There is one final hurdle: raw astronomical datasets are commonly peppered
-with Cosmic rays. Images of Cosmic rays aren't smoothed by the atmosphere
-or telescope aperture, so they have sharp boundaries. Also, since they
-don't occupy too many pixels, they don't affect the mode and median
-calculation. But their very high values can greatly bias the calculation of
-the mean (recall how the mean shifts the fastest in the presence of
-outliers), for example see Figure 15 in
-@url{https://arxiv.org/abs/1505.01664, Akhlaghi and Ichikawa (2015)}.
-
-The effect of outliers like cosmic rays on the mean and standard deviation
-can be removed through @mymath{\sigma}-clipping, see @ref{Sigma clipping}
-for a complete explanation. Therefore, after asserting that the mode and
-median are approximately equal in a tile (see @ref{Tessellation}), the
-final Sky value and its standard deviation are determined after
-@mymath{\sigma}-clipping with the @option{--sigmaclip} option.
-
-In the end, some of the tiles will pass the mean and median quantile
-difference test. However, prior to interpolating over the failed tiles,
-another point should be considered: large and extended galaxies, or bright
-stars, have wings which sink into the noise very gradually. In some cases,
-the gradient over these wings can be on scales that is larger than the
-tiles. The mean-median distance test will pass on such tiles and will cause
-a strong peak in the interpolated tile grid, see @ref{Detecting large
-extended targets}.
-
-The tiles that exist over the wings of large galaxies or bright stars are
-outliers in the distribution of tiles that passed the mean-median quantile
-distance test. Therefore, the final step of ``quantifying signal in a
-tile'' is to look at this distribution and remove the
-outliers. @mymath{\sigma}-clipping is a good solution for removing a few
-outliers, but the problem with outliers of this kind is that there may be
-many such tiles (depending on the large/bright stars/galaxies in the
-image). Therefore a novel outlier rejection algorithm will be used.
+There is one final hurdle: raw astronomical datasets are commonly peppered 
with Cosmic rays.
+Images of Cosmic rays aren't smoothed by the atmosphere or telescope aperture, 
so they have sharp boundaries.
+Also, since they don't occupy too many pixels, they don't affect the mode and 
median calculation.
+But their very high values can greatly bias the calculation of the mean 
(recall how the mean shifts the fastest in the presence of outliers), for 
example see Figure 15 in @url{https://arxiv.org/abs/1505.01664, Akhlaghi and 
Ichikawa (2015)}.
+
+The effect of outliers like cosmic rays on the mean and standard deviation can 
be removed through @mymath{\sigma}-clipping, see @ref{Sigma clipping} for a 
complete explanation.
+Therefore, after asserting that the mode and median are approximately equal in 
a tile (see @ref{Tessellation}), the final Sky value and its standard deviation 
are determined after @mymath{\sigma}-clipping with the @option{--sigmaclip} 
option.
+
+In the end, some of the tiles will pass the mean and median quantile 
difference test.
+However, prior to interpolating over the failed tiles, another point should be 
considered: large and extended galaxies, or bright stars, have wings which sink 
into the noise very gradually.
+In some cases, the gradient over these wings can be on scales that are larger 
than the tiles.
+The mean-median distance test will pass on such tiles and will cause a strong 
peak in the interpolated tile grid, see @ref{Detecting large extended targets}.
+
+The tiles that exist over the wings of large galaxies or bright stars are 
outliers in the distribution of tiles that passed the mean-median quantile 
distance test.
+Therefore, the final step of ``quantifying signal in a tile'' is to look at 
this distribution and remove the outliers.
+@mymath{\sigma}-clipping is a good solution for removing a few outliers, but 
the problem with outliers of this kind is that there may be many such tiles 
(depending on the large/bright stars/galaxies in the image).
+Therefore, a novel outlier rejection algorithm will be used.
 
 @cindex Outliers
 @cindex Identifying outliers
-To identify the first outlier, we'll use the distribution of distances
-between sorted elements. If there are @mymath{N} successful tiles, for
-every tile, the distance between the adjacent @mymath{N/2} previous
-elements is found, giving a distribution containing @mymath{N/2-1}
-points. The @mymath{\sigma}-clipped median and standard deviation of this
-distribution is then found (@mymath{\sigma}-clipping is configured with
-@option{--outliersclip}). Finally, if the distance between the element and
-its previous element is more than @option{--outliersigma} multiples of the
-@mymath{\sigma}-clipped standard deviation added with the
-@mymath{\sigma}-clipped median, that element is considered an outlier and
-all tiles larger than that value are ignored.
-
-Formally, if we assume there are @mymath{N} elements. They are first
-sorted. Searching for the outlier starts on element @mymath{N/2} (integer
-division). Let's take @mymath{v_i} to be the @mymath{i}-th element of the
-sorted input (with no blank values) and @mymath{m} and @mymath{\sigma} as
-the @mymath{\sigma}-clipped median and standard deviation from the
-distances of the previous @mymath{N/2-1} elements (not including
-@mymath{v_i}). If the value given to @option{--outliersigma} is displayed
-with @mymath{s}, the @mymath{i}-th element is considered as an outlier when
-the condition below is true.
+To identify the first outlier, we'll use the distribution of distances between 
sorted elements.
+If there are @mymath{N} successful tiles, for every tile, the distance between 
the adjacent @mymath{N/2} previous elements is found, giving a distribution 
containing @mymath{N/2-1} points.
+The @mymath{\sigma}-clipped median and standard deviation of this distribution 
is then found (@mymath{\sigma}-clipping is configured with 
@option{--outliersclip}).
+Finally, if the distance between an element and its previous element is larger 
than the @mymath{\sigma}-clipped median plus @option{--outliersigma} multiples 
of the @mymath{\sigma}-clipped standard deviation, that element is considered 
an outlier and all tiles larger than that value are ignored.
+
+Formally, assume there are @mymath{N} elements.
+They are first sorted.
+Searching for the outlier starts on element @mymath{N/2} (integer division).
+Let's take @mymath{v_i} to be the @mymath{i}-th element of the sorted input 
(with no blank values) and @mymath{m} and @mymath{\sigma} as the 
@mymath{\sigma}-clipped median and standard deviation from the distances of the 
previous @mymath{N/2-1} elements (not including @mymath{v_i}).
+If the value given to @option{--outliersigma} is denoted by @mymath{s}, the 
@mymath{i}-th element is considered an outlier when the condition below is 
true.
 
 @dispmath{{(v_i-v_{i-1})-m\over \sigma}>s}
 
-Since @mymath{i} begins from the median, the outlier has to be larger than
-the median. You can use the check images (for example @option{--checksky}
-in the Statistics program or @option{--checkqthresh},
-@option{--checkdetsky} and @option{--checksky} options in NoiseChisel for
-any of its steps that uses this outlier rejection) to inspect the steps and
-see which tiles have been discarded as outliers prior to interpolation.
+Since @mymath{i} begins from the median, the outlier has to be larger than the 
median.
+You can use the check images (for example @option{--checksky} in the 
Statistics program, or the @option{--checkqthresh}, @option{--checkdetsky} and 
@option{--checksky} options in NoiseChisel for any of its steps that use this 
outlier rejection) to inspect the steps and see which tiles have been discarded 
as outliers prior to interpolation.
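+
+For instance, with a hypothetical input @file{image.fits}, a minimal run
+like the one below would estimate the Sky over the tiles and also write the
+check file showing which tiles were discarded as outliers (the file name is
+only for illustration):
+
+@example
+$ aststatistics image.fits --sky --checksky
+@end example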
 
 
 
@@ -16471,9 +12143,8 @@ see which tiles have been discarded as outliers prior 
to interpolation.
 @node Invoking aststatistics,  , Sky value, Statistics
 @subsection Invoking Statistics
 
-Statistics will print statistical measures of an input dataset (table
-column or image). The executable name is @file{aststatistics} with the
-following general template
+Statistics will print statistical measures of an input dataset (table column 
or image).
+The executable name is @file{aststatistics} with the following general template
 
 @example
 $ aststatistics [OPTION ...] InputImage.fits
@@ -16510,94 +12181,66 @@ $ awk '($1+$2)/2 > 5 @{print $3@}' table.txt | 
aststatistics --median
 
 @noindent
 @cindex Standard input
-Statistics can take its input dataset either from a file (image or table)
-or the Standard input (see @ref{Standard input}). If any output file is to
-be created, the value to the @option{--output} option, is used as the base
-name for the generated files. Without @option{--output}, the input name
-will be used to generate an output name, see @ref{Automatic output}. The
-options described below are particular to Statistics, but for general
-operations, it shares a large collection of options with the other Gnuastro
-programs, see @ref{Common options} for the full list. For more on reading
-from standard input, please see the description of @code{--stdintimeout}
-option in @ref{Input output options}. Options can also be given in
-configuration files, for more, please see @ref{Configuration files}.
-
-The input dataset may have blank values (see @ref{Blank pixels}), in this
-case, all blank pixels are ignored during the calculation. Initially, the
-full dataset will be read, but it is possible to select a specific range of
-data elements to use in the analysis of each run. You can either directly
-specify a minimum and maximum value for the range of data elements to use
-(with @option{--greaterequal} or @option{--lessthan}), or specify the range
-using quantiles (with @option{--qrange}). If a range is specified, all
-pixels outside of it are ignored before any processing.
-
-The following set of options are for specifying the input/outputs of
-Statistics. There are many other input/output options that are common to
-all Gnuastro programs including Statistics, see @ref{Input output options}
-for those.
+Statistics can take its input dataset either from a file (image or table) or 
the Standard input (see @ref{Standard input}).
+If any output file is to be created, the value given to the @option{--output} 
option is used as the base name for the generated files.
+Without @option{--output}, the input name will be used to generate an output 
name, see @ref{Automatic output}.
+The options described below are particular to Statistics, but for general 
operations, it shares a large collection of options with the other Gnuastro 
programs, see @ref{Common options} for the full list.
+For more on reading from standard input, please see the description of the 
@option{--stdintimeout} option in @ref{Input output options}.
+Options can also be given in configuration files; for more, please see 
@ref{Configuration files}.
+
+The input dataset may have blank values (see @ref{Blank pixels}); in this 
case, all blank pixels are ignored during the calculation.
+Initially, the full dataset will be read, but it is possible to select a 
specific range of data elements to use in the analysis of each run.
+You can either directly specify a minimum and maximum value for the range of 
data elements to use (with @option{--greaterequal} or @option{--lessthan}), or 
specify the range using quantiles (with @option{--qrange}).
+If a range is specified, all pixels outside of it are ignored before any 
processing.
+
+The following set of options is for specifying the input/outputs of 
Statistics.
+There are many other input/output options that are common to all Gnuastro 
programs including Statistics, see @ref{Input output options} for those.
 
 @table @option
 
 @item -c STR/INT
 @itemx --column=STR/INT
-The column to use when the input file is a table with more than one
-column. See @ref{Selecting table columns} for a full description of how to
-use this option. For more on how tables are read in Gnuastro, please see
-@ref{Tables}.
+The column to use when the input file is a table with more than one column.
+See @ref{Selecting table columns} for a full description of how to use this 
option.
+For more on how tables are read in Gnuastro, please see @ref{Tables}.
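+
+For instance, assuming a hypothetical @file{table.fits} with a column named
+@code{MAG}, the command below would print the median of that column (both
+names are only for illustration):
+
+@example
+$ aststatistics table.fits --column=MAG --median
+@end example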
 
 @item -r STR/INT
 @itemx --refcol=STR/INT
-The reference column selector when the input file is a table. When a
-reference column is given, the range options below will be applied to this
-column and only elements in the input column that have a reference value in
-the correct range will be used. In practice this option allows you to
-select a subset of the input column based on values in another (the
-reference) column. All the statistical calculations will be done on the
-selected input column, not the reference column.
+The reference column selector when the input file is a table.
+When a reference column is given, the range options below will be applied to 
this column and only elements in the input column that have a reference value 
in the correct range will be used.
+In practice this option allows you to select a subset of the input column 
based on values in another (the reference) column.
+All the statistical calculations will be done on the selected input column, 
not the reference column.
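+
+As a sketch (with a hypothetical @file{table.fits} containing @code{MAG} and
+@code{SNR} columns), the command below would print the mean of @code{MAG}
+only for rows whose @code{SNR} is 7 or larger:
+
+@example
+$ aststatistics table.fits -cMAG --refcol=SNR --greaterequal=7 --mean
+@end example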
 
 @item -g FLT
 @itemx --greaterequal=FLT
-Limit the range of inputs into those with values greater and equal to what
-is given to this option. None of the values below this value will be used
-in any of the processing steps below.
+Limit the range of inputs to those with values greater than or equal to what 
is given to this option.
+None of the values below this value will be used in any of the processing 
steps below.
 
 @item -l FLT
 @itemx --lessthan=FLT
-Limit the range of inputs into those with values less-than what is given to
-this option. None of the values greater or equal to this value will be used
-in any of the processing steps below.
+Limit the range of inputs to those with values less than what is given to 
this option.
+None of the values greater than or equal to this value will be used in any of 
the processing steps below.
 
 @item -Q FLT[,FLT]
 @itemx --qrange=FLT[,FLT]
-Specify the range of usable inputs using the quantile. This option can take
-one or two quantiles to specify the range. When only one number is input
-(let's call it @mymath{Q}), the range will be those values in the quantile
-range @mymath{Q} to @mymath{1-Q}. So when only one value is given, it must
-be less than 0.5. When two values are given, the first is used as the lower
-quantile range and the second is used as the larger quantile range.
+Specify the range of usable inputs using the quantile.
+This option can take one or two quantiles to specify the range.
+When only one number is input (let's call it @mymath{Q}), the range will be 
those values in the quantile range @mymath{Q} to @mymath{1-Q}.
+So when only one value is given, it must be less than 0.5.
+When two values are given, the first is used as the lower quantile range and 
the second is used as the upper quantile range.
 
 @cindex Quantile
-The quantile of a given element in a dataset is defined by the fraction of
-its index to the total number of values in the sorted input array. So the
-smallest and largest values in the dataset have a quantile of 0.0 and
-1.0. The quantile is a very useful non-parametric (making no assumptions
-about the input) relative measure to specify a range. It can best be
-understood in terms of the cumulative frequency plot, see @ref{Histogram
-and Cumulative Frequency Plot}. The quantile of each horizontal axis value
-in the cumulative frequency plot is the vertical axis value associate with
-it.
+The quantile of a given element in a dataset is defined by the fraction of its 
index to the total number of values in the sorted input array.
+So the smallest and largest values in the dataset have a quantile of 0.0 and 
1.0.
+The quantile is a very useful non-parametric (making no assumptions about the 
input) relative measure to specify a range.
+It can best be understood in terms of the cumulative frequency plot, see 
@ref{Histogram and Cumulative Frequency Plot}.
+The quantile of each horizontal axis value in the cumulative frequency plot is 
the vertical axis value associated with it.
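+
+For example, the command below (on a hypothetical @file{image.fits}) would
+ignore the lowest and highest 5 percent of the pixel values before printing
+the median:
+
+@example
+$ aststatistics image.fits --qrange=0.05 --median
+@end example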
 
 @end table
 
 @cindex ASCII plot
-When no operation is requested, Statistics will print some general basic
-properties of the input dataset on the command-line like the example below
-(ran on one of the output images of @command{make check}@footnote{You can
-try it by running the command in the @file{tests} directory, open the image
-with a FITS viewer and have a look at it to get a sense of how these
-statistics relate to the input image/dataset.}). This default behavior is
-designed to help give you a general feeling of how the data are distributed
-and help in narrowing down your analysis.
+When no operation is requested, Statistics will print some general basic 
properties of the input dataset on the command-line like the example below (run 
on one of the output images of @command{make check}@footnote{You can try it by 
running the command in the @file{tests} directory, then open the image with a 
FITS viewer and have a look at it to get a sense of how these statistics relate 
to the input image/dataset.}).
+This default behavior is designed to help give you a general feeling of how 
the data are distributed and help in narrowing down your analysis.
 
 @example
 $ aststatistics convolve_spatial_scaled_noised.fits     \
@@ -16632,29 +12275,19 @@ Histogram:
  |-----------------------------------------------------------------
 @end example
 
-Gnuastro's Statistics is a very general purpose program, so to be able to
-easily understand this diversity in its operations (and how to possibly run
-them together), we'll divided the operations into two types: those that
-don't respect the position of the elements and those that do (by
-tessellating the input on a tile grid, see @ref{Tessellation}). The former
-treat the whole dataset as one and can re-arrange all the elements (for
-example sort them), but the former do their processing on each tile
-independently. First, we'll review the operations that work on the whole
-dataset.
+Gnuastro's Statistics is a very general purpose program, so to be able to 
easily understand this diversity in its operations (and how to possibly run 
them together), we'll divide the operations into two types: those that don't 
respect the position of the elements and those that do (by tessellating the 
input on a tile grid, see @ref{Tessellation}).
+The former treat the whole dataset as one and can re-arrange all the elements 
(for example sort them), while the latter do their processing on each tile 
independently.
+First, we'll review the operations that work on the whole dataset.
 
 @cindex AWK
 @cindex GNU AWK
-The group of options below can be used to get single value measurement(s)
-of the whole dataset. They will print only the requested value as one field
-in a line/row, like the @option{--mean}, @option{--median} options. These
-options can be called any number of times and in any order. The outputs of
-all such options will be printed on one line following each other (with a
-space character between them). This feature makes these options very useful
-in scripts, or to redirect into programs like GNU AWK for higher-level
-processing. These are some of the most basic measures, Gnuastro is still
-under heavy development and this list will grow. If you want another
-statistical parameter, please contact us and we will do out best to add it
-to this list, see @ref{Suggest new feature}.
+The group of options below can be used to get single value measurement(s) of 
the whole dataset.
+They will print only the requested value as one field in a line/row, like the 
@option{--mean}, @option{--median} options.
+These options can be called any number of times and in any order.
+The outputs of all such options will be printed on one line following each 
other (with a space character between them).
+This feature makes these options very useful in scripts, or to redirect into 
programs like GNU AWK for higher-level processing.
+These are some of the most basic measures; Gnuastro is still under heavy 
development and this list will grow.
+If you want another statistical parameter, please contact us and we will do 
our best to add it to this list, see @ref{Suggest new feature}.
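+
+For example, the command below (on a hypothetical @file{image.fits}) would
+print the mean and median, in that order, on a single line:
+
+@example
+$ aststatistics image.fits --mean --median
+@end example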
 
 @table @option
 
@@ -16685,62 +12318,46 @@ Print the median of all used elements.
 
 @item -u FLT[,FLT[,...]]
 @itemx --quantile=FLT[,FLT[,...]]
-Print the values at the given quantiles of the input dataset. Any number of
-quantiles may be given and one number will be printed for each. Values can
-either be written as a single number or as fractions, but must be between
-zero and one (inclusive). Hence, in effect @command{--quantile=0.25
---quantile=0.75} is equivalent to @option{--quantile=0.25,3/4}, or
-@option{-u1/4,3/4}.
-
-The returned value is one of the elements from the dataset. Taking
-@mymath{q} to be your desired quantile, and @mymath{N} to be the total
-number of used (non-blank and within the given range) elements, the
-returned value is at the following position in the sorted array:
-@mymath{round(q\times{}N}).
+Print the values at the given quantiles of the input dataset.
+Any number of quantiles may be given and one number will be printed for each.
+Values can be written either as single numbers or as fractions, but must be 
between zero and one (inclusive).
+Hence, in effect @command{--quantile=0.25 --quantile=0.75} is equivalent to 
@option{--quantile=0.25,3/4}, or @option{-u1/4,3/4}.
+
+The returned value is one of the elements from the dataset.
+Taking @mymath{q} to be your desired quantile, and @mymath{N} to be the total 
number of used (non-blank and within the given range) elements, the returned 
value is at the following position in the sorted array: 
@mymath{round(q\times{}N)}.
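+
+For example, the command below (on a hypothetical @file{image.fits}) would
+print the first and third quartiles on one line:
+
+@example
+$ aststatistics image.fits --quantile=0.25,0.75
+@end example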
 
 @item --quantfunc=FLT[,FLT[,...]]
-Print the quantiles of the given values in the dataset. This option is the
-inverse of the @option{--quantile} and operates similarly except that the
-acceptable values are within the range of the dataset, not between 0 and
-1. Formally it is known as the ``Quantile function''.
+Print the quantiles of the given values in the dataset.
+This option is the inverse of @option{--quantile} and operates similarly, 
except that the acceptable values are within the range of the dataset, not 
between 0 and 1.
+Formally it is known as the ``Quantile function''.
 
-Since the dataset is not continuous this function will find the nearest
-element of the dataset and use its position to estimate the quantile
-function.
+Since the dataset is not continuous, this function will find the nearest 
element of the dataset and use its position to estimate the quantile function.
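+
+For example, the command below (with a hypothetical @file{image.fits} and an
+illustrative value of 100) would print the quantile of the dataset element
+nearest to 100:
+
+@example
+$ aststatistics image.fits --quantfunc=100
+@end example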
 
 @item -O
 @itemx --mode
-Print the mode of all used elements. The mode is found through the mirror
-distribution which is fully described in Appendix C of
-@url{https://arxiv.org/abs/1505.01664, Akhlaghi and Ichikawa 2015}. See
-that section for a full description.
-
-This mode calculation algorithm is non-parametric, so when the dataset is
-not large enough (larger than about 1000 elements usually), or doesn't have
-a clear mode it can fail. In such cases, this option will return a value of
-@code{nan} (for the floating point NaN value).
-
-As described in that paper, the easiest way to assess the quality of this
-mode calculation method is to use it's symmetricity (see @option{--modesym}
-below). A better way would be to use the @option{--mirror} option to
-generate the histogram and cumulative frequency tables for any given mirror
-value (the mode in this case) as a table. If you generate plots like those
-shown in Figure 21 of that paper, then your mode is accurate.
+Print the mode of all used elements.
+The mode is found through the mirror distribution which is fully described in 
Appendix C of @url{https://arxiv.org/abs/1505.01664, Akhlaghi and Ichikawa 
2015}.
+See that section for a full description.
+
+This mode calculation algorithm is non-parametric, so when the dataset is not 
large enough (usually about 1000 elements or more are necessary), or doesn't 
have a clear mode, it can fail.
+In such cases, this option will return a value of @code{nan} (for the floating 
point NaN value).
+
+As described in that paper, the easiest way to assess the quality of this mode 
calculation method is to use its symmetricity (see @option{--modesym} below).
+A better way would be to use the @option{--mirror} option to generate the 
histogram and cumulative frequency tables for any given mirror value (the mode 
in this case) as a table.
+If you generate plots like those shown in Figure 21 of that paper, then your 
mode is accurate.
 
 @item --modequant
-Print the quantile of the mode. You can get the actual mode value from the
-@option{--mode} described above. In many cases, the absolute value of the
-mode is irrelevant, but its position within the distribution is
-important. In such cases, this option will become handy.
+Print the quantile of the mode.
+You can get the actual mode value from the @option{--mode} described above.
+In many cases, the absolute value of the mode is irrelevant, but its position 
within the distribution is important.
+In such cases, this option will come in handy.
 
 @item --modesym
-Print the symmetricity of the calculated mode. See the description of
-@option{--mode} for more. This mode algorithm finds the mode based on how
-symmetric it is, so if the symmetricity returned by this option is too low,
-the mode is not too accurate. See Appendix C of
-@url{https://arxiv.org/abs/1505.01664, Akhlaghi and Ichikawa 2015} for a
-full description. In practice, symmetricity values larger than 0.2 are
-mostly good.
+Print the symmetricity of the calculated mode.
+See the description of @option{--mode} for more.
+This mode algorithm finds the mode based on how symmetric it is, so if the 
symmetricity returned by this option is too low, the mode is not very accurate.
+See Appendix C of @url{https://arxiv.org/abs/1505.01664, Akhlaghi and Ichikawa 
2015} for a full description.
+In practice, symmetricity values larger than 0.2 are mostly good.
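+
+For example, the command below (on a hypothetical @file{image.fits}) would
+print the mode, its quantile and its symmetricity on one line, so the
+reliability of the measurement can be judged in a single run:
+
+@example
+$ aststatistics image.fits --mode --modequant --modesym
+@end example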
 
 @item --modesymvalue
 Print the value in the distribution where the mirror and input
@@ -16749,24 +12366,17 @@ of @url{https://arxiv.org/abs/1505.01664, Akhlaghi 
and Ichikawa 2015} for
 more.
 
 @item --sigclip-number
-Number of elements after applying @mymath{\sigma}-clipping (see @ref{Sigma
-clipping}). @mymath{\sigma}-clipping configuration is done with the
-@option{--sigclipparams} option.
+Number of elements after applying @mymath{\sigma}-clipping (see @ref{Sigma 
clipping}).
+@mymath{\sigma}-clipping configuration is done with the 
@option{--sclipparams} option.
 
 @item --sigclip-median
-Median after applying @mymath{\sigma}-clipping (see @ref{Sigma
-clipping}). @mymath{\sigma}-clipping configuration is done with the
-@option{--sigclipparams} option.
+Median after applying @mymath{\sigma}-clipping (see @ref{Sigma clipping}).
+@mymath{\sigma}-clipping configuration is done with the 
@option{--sclipparams} option.
 
 @cindex Outlier
-Here is one scenario where this can be useful: assume you have a table and
-you would like to remove the rows that are outliers (not within the
-@mymath{\sigma}-clipping range). Let's assume your table is called
-@file{table.fits} and you only want to keep the rows that have a value in
-@code{COLUMN} within the @mymath{\sigma}-clipped range (to
-@mymath{3\sigma}, with a tolerance of 0.1). This command will return the
-@mymath{\sigma}-clipped median and standard deviation (used to define the
-range later).
+Here is one scenario where this can be useful: assume you have a table and you 
would like to remove the rows that are outliers (not within the 
@mymath{\sigma}-clipping range).
+Let's assume your table is called @file{table.fits} and you only want to keep 
the rows that have a value in @code{COLUMN} within the @mymath{\sigma}-clipped 
range (to @mymath{3\sigma}, with a tolerance of 0.1).
+This command will return the @mymath{\sigma}-clipped median and standard 
deviation (used to define the range later).
 
 @example
 $ aststatistics table.fits -cCOLUMN --sclipparams=3,0.1 \
@@ -16774,18 +12384,12 @@ $ aststatistics table.fits -cCOLUMN 
--sclipparams=3,0.1 \
 @end example
 
 @cindex GNU AWK
-You can then use the @option{--range} option of Table (see @ref{Table}) to
-select the proper rows. But for that, you need the actual starting and
-ending values of the range (@mymath{m\pm s\sigma}; where @mymath{m} is the
-median and @mymath{s} is the multiple of sigma to define an
-outlier). Therefore, the raw outputs of Statistics in the command above
-aren't enough.
+You can then use the @option{--range} option of Table (see @ref{Table}) to 
select the proper rows.
+But for that, you need the actual starting and ending values of the range 
(@mymath{m\pm s\sigma}; where @mymath{m} is the median and @mymath{s} is the 
multiple of sigma to define an outlier).
+Therefore, the raw outputs of Statistics in the command above aren't enough.
 
-To get the starting and ending values of the non-outlier range (and put a
-`@key{,}' between them, ready to be used in @option{--range}), pipe the
-result into AWK. But in AWK, we'll also need the multiple of
-@mymath{\sigma}, so we'll define it as a shell variable (@code{s}) before
-calling Statistics (note how @code{$s} is used two times now):
+To get the starting and ending values of the non-outlier range (and put a 
`@key{,}' between them, ready to be used in @option{--range}), pipe the result 
into AWK.
+But in AWK, we'll also need the multiple of @mymath{\sigma}, so we'll define 
it as a shell variable (@code{s}) before calling Statistics (note how @code{$s} 
is used two times now):
 
 @example
 $ s=3
@@ -16794,9 +12398,8 @@ $ aststatistics table.fits -cCOLUMN 
--sclipparams=$s,0.1 \
      | awk '@{s='$s'; printf("%f,%f\n", $1-s*$2, $1+s*$2)@}'
 @end example
 
-To pass it onto Table, we'll need to keep the printed output from the
-command above in another shell variable (@code{r}), not print it. In Bash,
-can do this by putting the whole statement within a @code{$()}:
+To pass it onto Table, we'll need to keep the printed output from the command 
above in another shell variable (@code{r}), not print it.
+In Bash, we can do this by putting the whole statement within a @code{$()}:
 
 @example
 $ s=3
@@ -16806,53 +12409,38 @@ $ r=$(aststatistics table.fits -cCOLUMN 
--sclipparams=$s,0.1 \
 $ echo $r      # Just to confirm.
 @end example
 
-Now you can use Table with the @option{--range} option to only print the
-rows that have a value in @code{COLUMN} within the desired range:
+Now you can use Table with the @option{--range} option to only print the rows 
that have a value in @code{COLUMN} within the desired range:
 
 @example
 $ asttable table.fits --range=COLUMN,$r
 @end example
 
-To save the resulting table (that is clean of outliers) in another file
-(for example named @file{cleaned.fits}, it can also have a @file{.txt}
-suffix), just add @option{--output=cleaned.fits} to the command above.
+To save the resulting table (that is cleaned of outliers) in another file (for 
example named @file{cleaned.fits}; it can also have a @file{.txt} suffix), just 
add @option{--output=cleaned.fits} to the command above.
 
 
 @item --sigclip-mean
-Mean after applying @mymath{\sigma}-clipping (see @ref{Sigma
-clipping}). @mymath{\sigma}-clipping configuration is done with the
-@option{--sigclipparams} option.
+Mean after applying @mymath{\sigma}-clipping (see @ref{Sigma clipping}).
+@mymath{\sigma}-clipping configuration is done with the 
@option{--sclipparams} option.
 
 @item --sigclip-std
-Standard deviation after applying @mymath{\sigma}-clipping (see @ref{Sigma
-clipping}). @mymath{\sigma}-clipping configuration is done with the
-@option{--sigclipparams} option.
+Standard deviation after applying @mymath{\sigma}-clipping (see @ref{Sigma 
clipping}).
+@mymath{\sigma}-clipping configuration is done with the 
@option{--sclipparams} option.
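+
+Like the other single-valued measures, the four @mymath{\sigma}-clipping
+options above can be requested together; for example, the sketch below (on a
+hypothetical @file{image.fits}) would print them all on one line:
+
+@example
+$ aststatistics image.fits --sclipparams=3,0.1 --sigclip-number \
+                --sigclip-median --sigclip-mean --sigclip-std
+@end example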
 
 @end table
 
-The list of options below are for those statistical operations that output
-more than one value. So while they can be called together in one run, their
-outputs will be distinct (each one's output will usually be printed in more
-than one line).
+The list of options below is for those statistical operations that output 
more than one value.
+So while they can be called together in one run, their outputs will be 
distinct (each one's output will usually be printed in more than one line).
 
 @table @option
 
 @item -A
 @itemx --asciihist
-Print an ASCII histogram of the usable values within the input dataset
-along with some basic information like the example below (from the UVUDF
-catalog@footnote{@url{https://asd.gsfc.nasa.gov/UVUDF/uvudf_rafelski_2015.fits.gz}}).
 The
-width and height of the histogram (in units of character widths and heights
-on your command-line terminal) can be set with the @option{--numasciibins}
-(for the width) and @option{--asciiheight} options.
-
-For a full description of the histogram, please see @ref{Histogram and
-Cumulative Frequency Plot}. An ASCII plot is certainly very crude and
-cannot be used in any publication, but it is very useful for getting a
-general feeling of the input dataset very fast and easily on the
-command-line without having to take your hands off the keyboard (which is a
-major distraction!). If you want to try it out, you can write it all in one
-line and ignore the @key{\} and extra spaces.
+Print an ASCII histogram of the usable values within the input dataset along 
with some basic information like the example below (from the UVUDF 
catalog@footnote{@url{https://asd.gsfc.nasa.gov/UVUDF/uvudf_rafelski_2015.fits.gz}}).
+The width and height of the histogram (in units of character widths and 
heights on your command-line terminal) can be set with the 
@option{--numasciibins} (for the width) and @option{--asciiheight} options.
+
+For a full description of the histogram, please see @ref{Histogram and 
Cumulative Frequency Plot}.
+An ASCII plot is certainly very crude and cannot be used in any publication, 
but it is very useful for quickly and easily getting a general feeling for the 
input dataset on the command-line without having to take your hands off the 
keyboard (which is a major distraction!).
+If you want to try it out, you can write it all in one line and ignore the 
@key{\} and extra spaces.
 
 @example
 $ aststatistics uvudf_rafelski_2015.fits.gz --hdu=1         \
@@ -16877,11 +12465,9 @@ X: (linear: 17.7735 -- 31.4679, in 55 bins)
 @end example
 
 @item --asciicfp
-Print the cumulative frequency plot of the usable elements in the input
-dataset. Please see descriptions under @option{--asciihist} for more, the
-example below is from the same input table as that example. To better
-understand the cumulative frequency plot, please see @ref{Histogram and
-Cumulative Frequency Plot}.
+Print the cumulative frequency plot of the usable elements in the input 
dataset.
+Please see the descriptions under @option{--asciihist} for more; the example 
below is from the same input table as that example.
+To better understand the cumulative frequency plot, please see @ref{Histogram 
and Cumulative Frequency Plot}.
 
 @example
 $ aststatistics uvudf_rafelski_2015.fits.gz --hdu=1         \
@@ -16906,182 +12492,118 @@ X: (linear: 17.7735 -- 31.4679, in 55 bins)
 
 @item -H
 @itemx --histogram
-Save the histogram of the usable values in the input dataset into a
-table. The first column is the value at the center of the bin and the
-second is the number of points in that bin. If the @option{--cumulative}
-option is also called with this option in a run, then the table will have
-three columns (the third is the cumulative frequency plot). Through the
-@option{--numbins}, @option{--onebinstart}, or @option{--manualbinrange},
-you can modify the first column values and with @option{--normalize} and
-@option{--maxbinone} you can modify the second columns. See below for the
-description of each.
-
-By default (when no @option{--output} is specified) a plain text table will
-be created, see @ref{Gnuastro text table format}. If a FITS name is
-specified, you can use the common option @option{--tableformat} to have it
-as a FITS ASCII or FITS binary format, see @ref{Common options}. This table
-can then be fed into your favorite plotting tool and get a much more clean
-and nice histogram than what the raw command-line can offer you (with the
-@option{--asciihist} option).
+Save the histogram of the usable values in the input dataset into a table.
+The first column is the value at the center of the bin and the second is the 
number of points in that bin.
+If the @option{--cumulative} option is also called with this option in a run, 
then the table will have three columns (the third is the cumulative frequency 
plot).
+Through the @option{--numbins}, @option{--onebinstart}, or 
@option{--manualbinrange} options, you can modify the first column's values; 
with @option{--normalize} and @option{--maxbinone} you can modify the second's.
+See below for the description of each.
+
+By default (when no @option{--output} is specified) a plain text table will be 
created, see @ref{Gnuastro text table format}.
+If a FITS name is specified, you can use the common option 
@option{--tableformat} to have it as a FITS ASCII or FITS binary format, see 
@ref{Common options}.
+This table can then be fed into your favorite plotting tool to get a much 
cleaner and nicer histogram than what the raw command-line can offer you (with 
the @option{--asciihist} option).
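+
+As a sketch (on a hypothetical @file{image.fits}), the command below would
+save a 100-bin histogram into @file{hist.txt}:
+
+@example
+$ aststatistics image.fits --histogram --numbins=100 --output=hist.txt
+@end example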
 
 @item -C
 @itemx --cumulative
-Save the cumulative frequency plot of the usable values in the input
-dataset into a table, similar to @option{--histogram}.
+Save the cumulative frequency plot of the usable values in the input dataset 
into a table, similar to @option{--histogram}.
 
 @item -s
 @itemx --sigmaclip
-Do @mymath{\sigma}-clipping on the usable pixels of the input dataset. See
-@ref{Sigma clipping} for a full description on @mymath{\sigma}-clipping and
-also to better understand this option. The @mymath{\sigma}-clipping
-parameters can be set through the @option{--sclipparams} option (see
-below).
+Do @mymath{\sigma}-clipping on the usable pixels of the input dataset.
+See @ref{Sigma clipping} for a full description on @mymath{\sigma}-clipping 
and also to better understand this option.
+The @mymath{\sigma}-clipping parameters can be set through the 
@option{--sclipparams} option (see below).
 
 @item --mirror=FLT
-Make a histogram and cumulative frequency plot of the mirror distribution
-for the given dataset when the mirror is located at the value to this
-option. The mirror distribution is fully described in Appendix C of
-@url{https://arxiv.org/abs/1505.01664, Akhlaghi and Ichikawa 2015} and
-currently it is only used to calculate the mode (see @option{--mode}).
-
-Just note that the mirror distribution is a discrete distribution like the
-input, so while you may give any number as the value to this option, the
-actual mirror value is the closest number in the input dataset to this
-value. If the two numbers are different, Statistics will warn you of the
-actual mirror value used.
-
-This option will make a table as output. Depending on your selected name
-for the output, it will be either a FITS table or a plain text table (which
-is the default). It contains three columns: the first is the center of the
-bins, the second is the histogram (with the largest value set to 1) and the
-third is the normalized cumulative frequency plot of the mirror
-distribution. The bins will be positioned such that the mode is on the
-starting interval of one of the bins to make it symmetric around the
-mirror. With this output file and the input histogram (that you can
-generate in another run of Statistics, using the @option{--onebinvalue}),
-it is possible to make plots like Figure 21 of
-@url{https://arxiv.org/abs/1505.01664, Akhlaghi and Ichikawa 2015}.
+Make a histogram and cumulative frequency plot of the mirror distribution for 
the given dataset when the mirror is located at the value to this option.
+The mirror distribution is fully described in Appendix C of 
@url{https://arxiv.org/abs/1505.01664, Akhlaghi and Ichikawa 2015} and 
currently it is only used to calculate the mode (see @option{--mode}).
+
+Just note that the mirror distribution is a discrete distribution like the 
input, so while you may give any number as the value to this option, the actual 
mirror value is the closest number in the input dataset to this value.
+If the two numbers are different, Statistics will warn you of the actual 
mirror value used.
+
+This option will make a table as output.
+Depending on your selected name for the output, it will be either a FITS table 
or a plain text table (which is the default).
+It contains three columns: the first is the center of the bins, the second is 
the histogram (with the largest value set to 1) and the third is the normalized 
cumulative frequency plot of the mirror distribution.
+The bins will be positioned such that the mode is on the starting interval of 
one of the bins to make it symmetric around the mirror.
+With this output file and the input histogram (that you can generate in 
another run of Statistics, using @option{--onebinstart}), it is possible to 
make plots like Figure 21 of @url{https://arxiv.org/abs/1505.01664, Akhlaghi 
and Ichikawa 2015}.
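+
+For example, a run like the one below (the file name and mirror value are
+only for illustration) would write the mirror distribution's histogram and
+cumulative frequency plot into @file{mirror.txt}:
+
+@example
+$ aststatistics image.fits --mirror=0.02 --output=mirror.txt
+@end example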
 
 @end table
 
-The list of options below allow customization of the histogram and
-cumulative frequency plots (for the @option{--histogram},
-@option{--cumulative}, @option{--asciihist}, and @option{--asciicfp}
-options).
+The list of options below allows customization of the histogram and cumulative 
frequency plots (for the @option{--histogram}, @option{--cumulative}, 
@option{--asciihist}, and @option{--asciicfp} options).
 
 @table @option
 
 @item --numbins
-The number of bins (rows) to use in the histogram and the cumulative
-frequency plot tables (outputs of @option{--histogram} and
-@option{--cumulative}).
+The number of bins (rows) to use in the histogram and the cumulative frequency 
plot tables (outputs of @option{--histogram} and @option{--cumulative}).
 
 @item --numasciibins
-The number of bins (characters) to use in the ASCII plots when printing the
-histogram and the cumulative frequency plot (outputs of
-@option{--asciihist} and @option{--asciicfp}).
+The number of bins (characters) to use in the ASCII plots when printing the 
histogram and the cumulative frequency plot (outputs of @option{--asciihist} 
and @option{--asciicfp}).
 
 @item --asciiheight
-The number of lines to use when printing the ASCII histogram and cumulative
-frequency plot on the command-line (outputs of @option{--asciihist} and
-@option{--asciicfp}).
+The number of lines to use when printing the ASCII histogram and cumulative 
frequency plot on the command-line (outputs of @option{--asciihist} and 
@option{--asciicfp}).
 
 @item -n
 @itemx --normalize
-Normalize the histogram or cumulative frequency plot tables (outputs of
-@option{--histogram} and @option{--cumulative}). For a histogram, the sum
-of all bins will become one and for a cumulative frequency plot the last
-bin value will be one.
+Normalize the histogram or cumulative frequency plot tables (outputs of 
@option{--histogram} and @option{--cumulative}).
+For a histogram, the sum of all bins will become one and for a cumulative 
frequency plot the last bin value will be one.
 
 @item --maxbinone
-Divide all the histogram values by the maximum bin value so it becomes one
-and the rest are similarly scaled. In some situations (for example if you
-want to plot the histogram and cumulative frequency plot in one plot) this
-can be very useful.
+Divide all the histogram values by the maximum bin value so it becomes one and 
the rest are similarly scaled.
+In some situations (for example if you want to plot the histogram and 
cumulative frequency plot in one plot) this can be very useful.
 
 @item --onebinstart=FLT
-Make sure that one bin starts with the value to this option. In practice,
-this will shift the bins used to find the histogram and cumulative
-frequency plot such that one bin's lower interval becomes this value.
+Make sure that one bin starts with the value to this option.
+In practice, this will shift the bins used to find the histogram and 
cumulative frequency plot such that one bin's lower interval becomes this value.
 
-For example when a histogram range includes negative and positive values
-and zero has a special significance in your analysis, then zero might fall
-somewhere in one bin. As a result that bin will have counts of positive and
-negative. By setting @option{--onebinstart=0}, you can make sure that one
-bin will only count negative values in the vicinity of zero and the next
-bin will only count positive ones in that vicinity.
+For example, when a histogram range includes negative and positive values and 
zero has a special significance in your analysis, zero might fall somewhere 
inside one bin.
+As a result, that bin will count both negative and positive values.
+By setting @option{--onebinstart=0}, you can make sure that one bin will only 
count negative values in the vicinity of zero and the next bin will only count 
positive ones in that vicinity.
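+
+As a sketch of that scenario (on a hypothetical @file{image.fits}):
+
+@example
+$ aststatistics image.fits --histogram --onebinstart=0
+@end example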
 
 @cindex NaN
-Note that by default, the first row of the histogram and cumulative
-frequency plot show the central values of each bin. So in the example above
-you will not see the 0.000 in the first column, you will see two symmetric
-values.
+Note that by default, the first column of the histogram and cumulative 
frequency plot shows the central value of each bin.
+So in the example above you will not see the 0.000 in the first column; you 
will see two symmetric values.
 
-If the value is not within the usable input range, this option will be
-ignored. When it is, this option is the last operation before the bins are
-finalized, therefore it has a higher priority than options like
-@option{--manualbinrange}.
+If the value is not within the usable input range, this option will be ignored.
+When it is, this option is the last operation before the bins are finalized, 
therefore it has a higher priority than options like @option{--manualbinrange}.
 
 @item --manualbinrange
-Use the values given to the @option{--greaterequal} and @option{--lessthan}
-to define the range of all bin-based calculations like the histogram. This
-option itself doesn't take any value, but just tells the program to use the
-values of those two options instead of the minimum and maximum values of a
-plot. If any of the two options are not given, then the minimum or maximum
-will be used respectively. Therefore, if none of them are called calling
-this option is redundant.
+Use the values given to the @option{--greaterequal} and @option{--lessthan} to 
define the range of all bin-based calculations like the histogram.
+This option itself doesn't take any value, but just tells the program to use 
the values of those two options instead of the minimum and maximum values of a 
plot.
+If either of the two options is not given, then the minimum or maximum will be 
used respectively.
+Therefore, if neither of them is called, calling this option is redundant.
 
 The @option{--onebinstart} option has a higher priority than this option.
-In other words, @option{--onebinstart} takes effect after the range has
-been finalized and the initial bins have been defined, therefore it has the
-power to (possibly) shift the bins. If you want to manually set the range
-of the bins @emph{and} have one bin on a special value, it is thus better
-to avoid @option{--onebinstart}.
+In other words, @option{--onebinstart} takes effect after the range has been 
finalized and the initial bins have been defined, therefore it has the power to 
(possibly) shift the bins.
+If you want to manually set the range of the bins @emph{and} have one bin on a 
special value, it is thus better to avoid @option{--onebinstart}.
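+
+For example, the sketch below (with a hypothetical file and illustrative
+limits) would restrict the histogram's bins to the range of @mymath{-0.01}
+to @mymath{0.01}:
+
+@example
+$ aststatistics image.fits --histogram --manualbinrange \
+                --greaterequal=-0.01 --lessthan=0.01
+@end example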
 
 @end table
 
 
-All the options described until now were from the first class of operations
-discussed above: those that treat the whole dataset as one. However. It
-often happens that the relative position of the dataset elements over the
-dataset is significant. For example you don't want one median value for the
-whole input image, you want to know how the median changes over the
-image. For such operations, the input has to be tessellated (see
-@ref{Tessellation}). Thus this class of options can't currently be called
-along with the options above in one run of Statistics.
+All the options described until now were from the first class of operations 
discussed above: those that treat the whole dataset as one.
+However, it often happens that the relative position of the dataset elements 
over the dataset is significant.
+For example, you don't want one median value for the whole input image; you 
want to know how the median changes over the image.
+For such operations, the input has to be tessellated (see @ref{Tessellation}).
+Thus this class of options can't currently be called along with the options 
above in one run of Statistics.
 
 @table @option
 
 @item -t
 @itemx --ontile
-Do the respective single-valued calculation over one tile of the input
-dataset, not the whole dataset. This option must be called with at least one
-of the single valued options discussed above (for example @option{--mean}
-or @option{--quantile}). The output will be a file in the same format as
-the input. If the @option{--oneelempertile} option is called, then one
-element/pixel will be used for each tile (see @ref{Processing
-options}). Otherwise, the output will have the same size as the input, but
-each element will have the value corresponding to that tile's value. If
-multiple single valued operations are called, then for each operation there
-will be one extension in the output FITS file.
+Do the respective single-valued calculation over one tile of the input 
dataset, not the whole dataset.
+This option must be called with at least one of the single valued options 
discussed above (for example @option{--mean} or @option{--quantile}).
+The output will be a file in the same format as the input.
+If the @option{--oneelempertile} option is called, then one element/pixel will 
be used for each tile (see @ref{Processing options}).
+Otherwise, the output will have the same size as the input, but each element 
will have the value corresponding to that tile's value.
+If multiple single valued operations are called, then for each operation there 
will be one extension in the output FITS file.
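+
+For example, the command below (on a hypothetical @file{image.fits}) would
+write the median of each tile into one pixel of the output:
+
+@example
+$ aststatistics image.fits --ontile --median --oneelempertile
+@end example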
 
 @item -y
 @itemx --sky
-Estimate the Sky value on each tile as fully described in @ref{Quantifying
-signal in a tile}. As described in that section, several options are
-necessary to configure the Sky estimation which are listed below. The
-output file will have two extensions: the first is the Sky value and the
-second is the Sky standard deviation on each tile. Similar to
-@option{--ontile}, if the @option{--oneelempertile} option is called, then
-one element/pixel will be used for each tile (see @ref{Processing
-options}).
+Estimate the Sky value on each tile as fully described in @ref{Quantifying 
signal in a tile}.
+As described in that section, several options are necessary to configure the 
Sky estimation which are listed below.
+The output file will have two extensions: the first is the Sky value and the 
second is the Sky standard deviation on each tile.
+Similar to @option{--ontile}, if the @option{--oneelempertile} option is 
called, then one element/pixel will be used for each tile (see @ref{Processing 
options}).
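+
+As a sketch (on a hypothetical @file{image.fits}, with purely illustrative
+values for the options described below), a customized Sky estimation might
+look like this:
+
+@example
+$ aststatistics image.fits --sky --meanmedqdiff=0.005 \
+                --outliersigma=5 --sclipparams=3,0.1
+@end example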
 
 @end table
 
-The parameters for estimating the sky value can be set with the following
-options, except for the @option{--sclipparams} option (which is also used
-by the @option{--sigmaclip}), the rest are only used for the Sky value
-estimation.
+The parameters for estimating the Sky value can be set with the following 
options.
+Except for the @option{--sclipparams} option (which is also used by 
@option{--sigmaclip}), they are only used for the Sky value estimation.
 
 @table @option
 
@@ -17095,69 +12617,50 @@ Kernel HDU to help in estimating the significance of 
signal in a tile, see
 @ref{Quantifying signal in a tile}.
 
 @item --meanmedqdiff=FLT
-The maximum acceptable distance between the quantiles of the mean and
-median, see @ref{Quantifying signal in a tile}. The initial Sky and its
-standard deviation estimates are measured on tiles where the quantiles of
-their mean and median are less distant than the value given to this
-option. For example @option{--meanmedqdiff=0.01} means that only tiles
-where the mean's quantile is between 0.49 and 0.51 (recall that the
-median's quantile is 0.5) will be used.
+The maximum acceptable distance between the quantiles of the mean and median, 
see @ref{Quantifying signal in a tile}.
+The initial Sky and its standard deviation estimates are measured on tiles 
where the quantiles of their mean and median are less distant than the value 
given to this option.
+For example @option{--meanmedqdiff=0.01} means that only tiles where the 
mean's quantile is between 0.49 and 0.51 (recall that the median's quantile is 
0.5) will be used.
 
 @item --sclipparams=FLT,FLT
-The @mymath{\sigma}-clipping parameters, see @ref{Sigma clipping}. This
-option takes two values which are separated by a comma (@key{,}). Each
-value can either be written as a single number or as a fraction of two
-numbers (for example @code{3,1/10}). The first value to this option is the
-multiple of @mymath{\sigma} that will be clipped (@mymath{\alpha} in that
-section). The second value is the exit criteria. If it is less than 1, then
-it is interpreted as tolerance and if it is larger than one it is a
-specific number. Hence, in the latter case the value must be an integer.
+The @mymath{\sigma}-clipping parameters, see @ref{Sigma clipping}.
+This option takes two values which are separated by a comma (@key{,}).
+Each value can either be written as a single number or as a fraction of two 
numbers (for example @code{3,1/10}).
+The first value to this option is the multiple of @mymath{\sigma} that will be 
clipped (@mymath{\alpha} in that section).
+The second value is the exit criteria.
+If it is less than 1, it is interpreted as a tolerance; if it is larger than 
one, it is a specific number (of clipping iterations).
+Hence, in the latter case the value must be an integer.
 
 @item --outliersclip=FLT,FLT
 @mymath{\sigma}-clipping parameters for the outlier rejection of the Sky
 value (similar to @option{--sclipparams}).
 
-Outlier rejection is useful when the dataset contains a large and diffuse
-(almost flat within each tile) signal. The flatness of the profile will
-cause it to successfully pass the mean-median quantile difference test, so
-we'll need to use the distribution of successful tiles for removing these
-false positive. For more, see the latter half of @ref{Quantifying signal in
-a tile}.
+Outlier rejection is useful when the dataset contains a large and diffuse 
(almost flat within each tile) signal.
+The flatness of the profile will cause it to successfully pass the mean-median 
quantile difference test, so we'll need to use the distribution of successful 
tiles to remove these false positives.
+For more, see the latter half of @ref{Quantifying signal in a tile}.
 
 @item --outliersigma=FLT
-Multiple of sigma to define an outlier in the Sky value estimation. If this
-option is given a value of zero, no outlier rejection will take place. For
-more see @option{--outliersclip} and the latter half of @ref{Quantifying
-signal in a tile}.
+Multiple of sigma to define an outlier in the Sky value estimation.
+If this option is given a value of zero, no outlier rejection will take place.
+For more see @option{--outliersclip} and the latter half of @ref{Quantifying 
signal in a tile}.
 
 @item --smoothwidth=INT
-Width of a flat kernel to convolve the interpolated tile values. Tile
-interpolation is done using the median of the @option{--interpnumngb}
-neighbors of each tile (see @ref{Processing options}). If this option is
-given a value of zero or one, no smoothing will be done. Without smoothing,
-strong boundaries will probably be created between the values estimated for
-each tile. It is thus good to smooth the interpolated image so strong
-discontinuities do not show up in the final Sky values. The smoothing is
-done through convolution (see @ref{Convolution process}) with a flat
-kernel, so the value to this option must be an odd number.
+Width of a flat kernel to convolve the interpolated tile values.
+Tile interpolation is done using the median of the @option{--interpnumngb} 
neighbors of each tile (see @ref{Processing options}).
+If this option is given a value of zero or one, no smoothing will be done.
+Without smoothing, strong boundaries will probably be created between the 
values estimated for each tile.
+It is thus good to smooth the interpolated image so strong discontinuities do 
not show up in the final Sky values.
+The smoothing is done through convolution (see @ref{Convolution process}) with 
a flat kernel, so the value to this option must be an odd number.
 
 @item --ignoreblankintiles
-Don't set the input's blank pixels to blank in the tiled outputs (for
-example Sky and Sky standard deviation extensions of the output). This is
-only applicable when the tiled output has the same size as the input, in
-other words, when @option{--oneelempertile} isn't called.
+Don't set the input's blank pixels to blank in the tiled outputs (for example 
Sky and Sky standard deviation extensions of the output).
+This is only applicable when the tiled output has the same size as the input, 
in other words, when @option{--oneelempertile} isn't called.
 
-By default, blank values in the input (commonly on the edges which are
-outside the survey/field area) will be set to blank in the tiled outputs
-also. But in other scenarios this default behavior is not desired: for
-example if you have masked something in the input, but want the tiled
-output under that also.
+By default, blank values in the input (commonly on the edges which are outside 
the survey/field area) will be set to blank in the tiled outputs also.
+But in other scenarios this default behavior is not desired: for example, if you have masked something in the input but still want the tiled output values under that mask also.
 
 @item --checksky
-Create a multi-extension FITS file showing the steps that were used to
-estimate the Sky value over the input, see @ref{Quantifying signal in a
-tile}. The file will have two extensions for each step (one for the Sky and
-one for the Sky standard deviation).
+Create a multi-extension FITS file showing the steps that were used to 
estimate the Sky value over the input, see @ref{Quantifying signal in a tile}.
+The file will have two extensions for each step (one for the Sky and one for 
the Sky standard deviation).
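+
+As an illustrative sketch (the program and file name here are assumptions), the check file could be produced with a command like the one below and inspected extension by extension with Gnuastro's Fits program:
+
+@example
+$ aststatistics image.fits --sky --checksky
+@end example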
 
 @end table
 
@@ -17167,131 +12670,75 @@ one for the Sky standard deviation).
 @cindex Labeling
 @cindex Detection
 @cindex Segmentation
-Once instrumental signatures are removed from the raw data (image) in the
-initial reduction process (see @ref{Data manipulation}). You are naturally
-eager to start answering the scientific questions that motivated the data
-collection in the first place. However, the raw dataset/image is just an
-array of values/pixels, that is all! These raw values cannot directly be
-used to answer your scientific questions: for example ``how many galaxies
-are there in the image?''.
-
-The first high-level step in your analysis of your targets will thus be to
-classify, or label, the dataset elements (pixels) into two classes: 1)
-noise, where random effects are the major contributor to the value, and 2)
-signal, where non-random factors (for example light from a distant galaxy)
-are present. This classification of the elements in a dataset is formally
-known as @emph{detection}.
-
-In an observational/experimental dataset, signal is always buried in noise:
-only mock/simulated datasets are free of noise. Therefore detection, or the
-process of separating signal from noise, determines the number of objects
-you study and the accuracy of any higher-level measurement you do on
-them. Detection is thus the most important step of any analysis and is not
-trivial. In particular, the most scientifically interesting astronomical
-targets are faint, can have a large variety of morphologies, along with a
-large distribution in brightness and size. Therefore when noise is
-significant, proper detection of your targets is a uniquely decisive step
-in your final scientific analysis/result.
+Once instrumental signatures are removed from the raw data (image) in the initial reduction process (see @ref{Data manipulation}), you are naturally eager to start answering the scientific questions that motivated the data collection in the first place.
+However, the raw dataset/image is just an array of values/pixels, that is all!
+These raw values cannot directly be used to answer your scientific questions: for example ``how many galaxies are there in the image?''.
+
+The first high-level step in your analysis of your targets will thus be to 
classify, or label, the dataset elements (pixels) into two classes: 1) noise, 
where random effects are the major contributor to the value, and 2) signal, 
where non-random factors (for example light from a distant galaxy) are present.
+This classification of the elements in a dataset is formally known as 
@emph{detection}.
+
+In an observational/experimental dataset, signal is always buried in noise: 
only mock/simulated datasets are free of noise.
+Therefore detection, or the process of separating signal from noise, 
determines the number of objects you study and the accuracy of any higher-level 
measurement you do on them.
+Detection is thus the most important step of any analysis and is not trivial.
+In particular, the most scientifically interesting astronomical targets are faint and can have a large variety of morphologies, along with a large distribution in brightness and size.
+Therefore when noise is significant, proper detection of your targets is a 
uniquely decisive step in your final scientific analysis/result.
 
 @cindex Erosion
-NoiseChisel is Gnuastro's program for detection of targets that don't have
-a sharp border (almost all astronomical objects). When the targets have a
-sharp edges/border (for example cells in biological imaging), a simple
-threshold is enough to separate them from noise and each other (if they are
-not touching). To detect such sharp-edged targets, you can use Gnuastro's
-Arithmetic program in a command like below (assuming the threshold is
-@code{100}, see @ref{Arithmetic}):
+NoiseChisel is Gnuastro's program for detection of targets that don't have a 
sharp border (almost all astronomical objects).
+When the targets have sharp edges/borders (for example cells in biological imaging), a simple threshold is enough to separate them from noise and each other (if they are not touching).
+To detect such sharp-edged targets, you can use Gnuastro's Arithmetic program 
in a command like below (assuming the threshold is @code{100}, see 
@ref{Arithmetic}):
 
 @example
 $ astarithmetic in.fits 100 gt 2 connected-components
 @end example
 
-Since almost no astronomical target has such sharp edges, we need a more
-advanced detection methodology. NoiseChisel uses a new noise-based paradigm
-for detection of very extended and diffuse targets that are drowned deeply
-in the ocean of noise. It was initially introduced in
-@url{https://arxiv.org/abs/1505.01664, Akhlaghi and Ichikawa [2015]}. The
-name of NoiseChisel is derived from the first thing it does after
-thresholding the dataset: to erode it. In mathematical morphology, erosion
-on pixels can be pictured as carving-off boundary pixels. Hence, what
-NoiseChisel does is similar to what a wood chisel or stone chisel do. It is
-just not a hardware, but a software. In fact, looking at it as a chisel and
-your dataset as a solid cube of rock will greatly help in effectively
-understanding and optimally using it: with NoiseChisel you literally carve
-your targets out of the noise. Try running it with the
-@option{--checkdetection} option to see each step of the carving process on
-your input dataset.
+Since almost no astronomical target has such sharp edges, we need a more 
advanced detection methodology.
+NoiseChisel uses a new noise-based paradigm for detection of very extended and 
diffuse targets that are drowned deeply in the ocean of noise.
+It was initially introduced in @url{https://arxiv.org/abs/1505.01664, Akhlaghi 
and Ichikawa [2015]}.
+The name of NoiseChisel is derived from the first thing it does after 
thresholding the dataset: to erode it.
+In mathematical morphology, erosion on pixels can be pictured as carving-off 
boundary pixels.
+Hence, what NoiseChisel does is similar to what a wood or stone chisel does.
+It is just not hardware, but software.
+In fact, looking at it as a chisel and your dataset as a solid cube of rock 
will greatly help in effectively understanding and optimally using it: with 
NoiseChisel you literally carve your targets out of the noise.
+Try running it with the @option{--checkdetection} option to see each step of 
the carving process on your input dataset.
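+
+For example, on a hypothetical @file{image.fits}, the command below would write the intermediate detection steps into a multi-extension check file:
+
+@example
+$ astnoisechisel image.fits --checkdetection
+@end example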
 
 @cindex Segmentation
-NoiseChisel's primary output is a binary detection map with the same size
-as the input but only with two values: 0 and 1. Pixels that don't harbor
-any detected signal (noise) are given a label (or value) of zero and those
-with a value of 1 have been identified as hosting signal.
-
-Segmentation is the process of classifying the signal into higher-level
-constructs. For example if you have two separate galaxies in one image, by
-default NoiseChisel will give a value of 1 to the pixels of both, but after
-segmentation, the pixels in each will get separate labels. NoiseChisel is
-only focused on detection (separating signal from noise), to @emph{segment}
-the signal (into separate galaxies for example), Gnuastro has a separate
-specialized program @ref{Segment}. NoiseChisel's output can be
-directly/readily fed into Segment.
-
-For more on NoiseChisel's output format and its benefits (especially in
-conjunction with @ref{Segment} and later @ref{MakeCatalog}), please see
-@url{https://arxiv.org/abs/1611.06387, Akhlaghi [2016]}. Just note that
-when that paper was published, Segment was not yet spun-off into a separate
-program, and NoiseChisel done both detection and segmentation.
-
-NoiseChisel's output is designed to be generic enough to be easily used in
-any higher-level analysis. If your targets are not touching after running
-NoiseChisel and you aren't interested in their sub-structure, you don't
-need the Segment program at all. You can ask NoiseChisel to find the
-connected pixels in the output with the @option{--label} option. In this
-case, the output won't be a binary image any more, the signal will have
-counters/labels starting from 1 for each connected group of pixels. You can
-then directly feed NoiseChisel's output into MakeCatalog for measurements
-over the detections and the production of a catalog (see
-@ref{MakeCatalog}).
-
-Thanks to the published papers mentioned above, there is no need to provide
-a more complete introduction to NoiseChisel in this book. However,
-published papers cannot be updated any more, but the software has
-evolved/changed. The changes since publication are documented in
-@ref{NoiseChisel changes after publication}. Afterwards, in @ref{Invoking
-astnoisechisel}, the details of running NoiseChisel and its options are
-discussed.
-
-As discussed above, detection is one of the most important steps for your
-scientific result. It is therefore very important to obtain a good
-understanding of NoiseChisel (and afterwards @ref{Segment} and
-@ref{MakeCatalog}). We thus strongly recommend that after reading the
-papers above and the respective sections of Gnuastro's book, you play a
-little with the settings (in the order presented in the paper and
-@ref{Invoking astnoisechisel}) on a dataset you are familiar with and
-inspect all the check images (options starting with @option{--check}) to
-see the effect of each parameter.
-
-We strongly recommend going over the two tutorials of @ref{General program
-usage tutorial} and @ref{Detecting large extended targets}. They are
-designed to show how to most effectively use NoiseChisel for the detection
-of small faint objects and large extended objects. In the meantime, they
-will show you the modular principle behind Gnuastro's programs and how they
-are built to complement, and build upon, each other. @ref{General program
-usage tutorial} culminates in using NoiseChisel to detect galaxies and use
-its outputs to find the galaxy colors. Defining colors is a very common
-process in most science-cases. Therefore it is also recommended to
-(patiently) complete that tutorial for optimal usage of NoiseChisel in
-conjunction with all the other Gnuastro programs. @ref{Detecting large
-extended targets} shows you can optimize NoiseChisel's settings for very
-extended objects to successfully carve out to signal-to-noise ratio levels
-of below 1/10.
-
-In @ref{NoiseChisel changes after publication}, we'll review the changes in
-NoiseChisel since the publication of @url{https://arxiv.org/abs/1505.01664,
-Akhlaghi and Ichikawa [2015]}. We will then review NoiseChisel's input,
-detection, and output options in @ref{NoiseChisel input}, @ref{Detection
-options}, and @ref{NoiseChisel output}.
+NoiseChisel's primary output is a binary detection map with the same size as 
the input but only with two values: 0 and 1.
+Pixels that don't harbor any detected signal (noise) are given a label (or 
value) of zero and those with a value of 1 have been identified as hosting 
signal.
+
+Segmentation is the process of classifying the signal into higher-level 
constructs.
+For example if you have two separate galaxies in one image, by default 
NoiseChisel will give a value of 1 to the pixels of both, but after 
segmentation, the pixels in each will get separate labels.
+NoiseChisel is only focused on detection (separating signal from noise); to @emph{segment} the signal (into separate galaxies for example), Gnuastro has a separate specialized program: @ref{Segment}.
+NoiseChisel's output can be directly/readily fed into Segment.
+
+For more on NoiseChisel's output format and its benefits (especially in 
conjunction with @ref{Segment} and later @ref{MakeCatalog}), please see 
@url{https://arxiv.org/abs/1611.06387, Akhlaghi [2016]}.
+Just note that when that paper was published, Segment was not yet spun-off into a separate program, and NoiseChisel did both detection and segmentation.
+
+NoiseChisel's output is designed to be generic enough to be easily used in any 
higher-level analysis.
+If your targets are not touching after running NoiseChisel and you aren't 
interested in their sub-structure, you don't need the Segment program at all.
+You can ask NoiseChisel to find the connected pixels in the output with the 
@option{--label} option.
+In this case, the output won't be a binary image any more; the signal will have counters/labels starting from 1 for each connected group of pixels.
+You can then directly feed NoiseChisel's output into MakeCatalog for 
measurements over the detections and the production of a catalog (see 
@ref{MakeCatalog}).
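+
+As a minimal sketch of such a pipeline (the file and column names here are only illustrative), you might run:
+
+@example
+$ astnoisechisel image.fits --label --output=labeled.fits
+$ astmkcatalog labeled.fits --ids --x --y --magnitude
+@end example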
+
+Thanks to the published papers mentioned above, there is no need to provide a 
more complete introduction to NoiseChisel in this book.
+However, while published papers cannot be updated any more, the software has evolved/changed.
+The changes since publication are documented in @ref{NoiseChisel changes after 
publication}.
+Afterwards, in @ref{Invoking astnoisechisel}, the details of running 
NoiseChisel and its options are discussed.
+
+As discussed above, detection is one of the most important steps for your 
scientific result.
+It is therefore very important to obtain a good understanding of NoiseChisel 
(and afterwards @ref{Segment} and @ref{MakeCatalog}).
+We thus strongly recommend that after reading the papers above and the 
respective sections of Gnuastro's book, you play a little with the settings (in 
the order presented in the paper and @ref{Invoking astnoisechisel}) on a 
dataset you are familiar with and inspect all the check images (options 
starting with @option{--check}) to see the effect of each parameter.
+
+We strongly recommend going over the two tutorials of @ref{General program 
usage tutorial} and @ref{Detecting large extended targets}.
+They are designed to show how to most effectively use NoiseChisel for the 
detection of small faint objects and large extended objects.
+In the meantime, they will show you the modular principle behind Gnuastro's 
programs and how they are built to complement, and build upon, each other.
+@ref{General program usage tutorial} culminates in using NoiseChisel to detect 
galaxies and use its outputs to find the galaxy colors.
+Defining colors is a very common process in most science cases.
+Therefore it is also recommended to (patiently) complete that tutorial for optimal usage of NoiseChisel in conjunction with all the other Gnuastro programs.
+@ref{Detecting large extended targets} shows how you can optimize NoiseChisel's settings for very extended objects to successfully carve out signal down to signal-to-noise ratio levels below 1/10.
+
+In @ref{NoiseChisel changes after publication}, we'll review the changes in 
NoiseChisel since the publication of @url{https://arxiv.org/abs/1505.01664, 
Akhlaghi and Ichikawa [2015]}.
+We will then review NoiseChisel's input, detection, and output options in 
@ref{NoiseChisel input}, @ref{Detection options}, and @ref{NoiseChisel output}.
 
 @menu
 * NoiseChisel changes after publication::  NoiseChisel updates after paper's 
publication.
@@ -17301,51 +12748,34 @@ options}, and @ref{NoiseChisel output}.
 @node NoiseChisel changes after publication, Invoking astnoisechisel, 
NoiseChisel, NoiseChisel
 @subsection NoiseChisel changes after publication
 
-NoiseChisel was initially introduced in
-@url{https://arxiv.org/abs/1505.01664, Akhlaghi and Ichikawa [2015]}. It is
-thus strongly recommended to read this paper for a good understanding of
-what it does and how each parameter influences the output. To help in
-understanding how it works, that paper has a large number of figures
-showing every step on multiple mock and real examples.
-
-However, the paper cannot be updated anymore, but NoiseChisel has evolved
-(and will continue to do so): better algorithms or steps have been found,
-thus options will be added or removed. This book is thus the final and
-definitive guide to NoiseChisel. The aim of this section is to make the
-transition from the paper to the installed version on your system, as
-smooth as possible with the list below. For a more detailed list of changes
-in previous Gnuastro releases/versions, please see the @file{NEWS}
-file@footnote{The @file{NEWS} file is present in the released Gnuastro
-tarball, see @ref{Release tarball}.}.
-
-The most important change since the publication of that paper is that from
-Gnuastro 0.6, NoiseChisel is only in charge on detection. Segmentation of
-the detected signal was spun-off into a separate program:
-@ref{Segment}. This spin-off allows much greater creativity and is in the
-spirit of Gnuastro's modular design (see @ref{Program design philosophy}).
-Below you can see the major changes since that paper was published. First,
-the removed options/features are discussed, then we review the new features
-that have been added.
+NoiseChisel was initially introduced in @url{https://arxiv.org/abs/1505.01664, 
Akhlaghi and Ichikawa [2015]}.
+It is thus strongly recommended to read this paper for a good understanding of 
what it does and how each parameter influences the output.
+To help in understanding how it works, that paper has a large number of 
figures showing every step on multiple mock and real examples.
+
+However, while the paper cannot be updated anymore, NoiseChisel has evolved (and will continue to do so): better algorithms or steps have been found, thus options will be added or removed.
+This book is thus the final and definitive guide to NoiseChisel.
+The aim of this section is to make the transition from the paper to the installed version on your system as smooth as possible, with the list below.
+For a more detailed list of changes in previous Gnuastro releases/versions, 
please see the @file{NEWS} file@footnote{The @file{NEWS} file is present in the 
released Gnuastro tarball, see @ref{Release tarball}.}.
+
+The most important change since the publication of that paper is that from Gnuastro 0.6, NoiseChisel is only in charge of detection.
+Segmentation of the detected signal was spun-off into a separate program: 
@ref{Segment}.
+This spin-off allows much greater creativity and is in the spirit of 
Gnuastro's modular design (see @ref{Program design philosophy}).
+Below you can see the major changes since that paper was published.
+First, the removed options/features are discussed, then we review the new 
features that have been added.
 
 @noindent
 Removed features/options:
 @itemize
 
 @item
-@option{--skysubtracted}: This option was used to account for the extra
-noise that is added if the Sky value has already been subtracted. However,
-NoiseChisel estimates the Sky standard deviation based on the input data,
-not exposure maps or other theoretical considerations. Therefore the
-standard deviation of the undetected pixels also contains the errors due to
-any previous sky subtraction. This option is therefore no longer present in
-NoiseChisel.
-
-@option{--dilate}: In the paper, true detections were dilated for a final
-dig into the noise. However, the simple 8-connected dilation produced boxy
-results which were not realistic and could miss diffuse flux. The final dig
-into the noise is now done by ``grow''ing the true detections, similar to
-how true clumps were grown, see the description of @option{--detgrowquant}
-below and in @ref{Detection options} for more on the new alternative.
+@option{--skysubtracted}: This option was used to account for the extra noise 
that is added if the Sky value has already been subtracted.
+However, NoiseChisel estimates the Sky standard deviation based on the input 
data, not exposure maps or other theoretical considerations.
+Therefore the standard deviation of the undetected pixels also contains the 
errors due to any previous sky subtraction.
+This option is therefore no longer present in NoiseChisel.
+
+@item
+@option{--dilate}: In the paper, true detections were dilated for a final dig into the noise.
+However, the simple 8-connected dilation produced boxy results which were not 
realistic and could miss diffuse flux.
+The final dig into the noise is now done by ``grow''ing the true detections, 
similar to how true clumps were grown, see the description of 
@option{--detgrowquant} below and in @ref{Detection options} for more on the 
new alternative.
 
 @item
 Segmentation has been completely moved to a new program: @ref{Segment}.
@@ -17356,104 +12786,68 @@ Added features/options:
 @itemize
 
 @item
-@option{--widekernel}: NoiseChisel uses the difference between the mode and
-median to identify if a tile should be used for estimating the quantile
-thresholds (see @ref{Quantifying signal in a tile}). Until now, NoiseChisel
-would convolve an image once and estimate the proper tiles for quantile
-estimations on the convolved image. The same convolved image would later be
-used for quantile estimation. A larger kernel does increase the skewness
-(and thus difference between the mode and median, therefore helps in
-detecting the presence signal), however, it disfigures the
-shapes/morphology of the objects.
-
-This new @option{--widekernel} option (and a corresponding @option{--wkhdu}
-option to specify its HDU) option are added to solve such cases. When its
-given, the input will be convolved with both the sharp (given through the
-@option{--kernel} option) and wide kernels. The mode and median are
-calculated on the dataset that is convolved with the wider kernel, then the
-quantiles are estimated on the image convolved with the sharper kernel.
+The quantile difference to identify tiles with no significant signal is 
measured between the @emph{mean} and median.
+In the published paper, it was between the @emph{mode} and median.
+The quantile of the mean is more sensitive to skewness (the presence of 
signal), so it is preferable to the quantile of the mode.
+For more see @ref{Quantifying signal in a tile}.
 
 @item
-The quantile difference to identify tiles with no significant signal is
-measured between the @emph{mean} and median. In the published paper, it was
-between the @emph{mode} and median. The quantile of the mean is more
-sensitive to skewness (the presence of signal), so it is preferable to the
-quantile of the mode. For more see @ref{Quantifying signal in a tile}.
+@option{--widekernel}: NoiseChisel uses the difference between the mean and 
median to identify if a tile should be used for estimating the quantile 
thresholds (see @ref{Quantifying signal in a tile}).
+Until now, NoiseChisel would convolve an image once and estimate the proper 
tiles for quantile estimations on the convolved image.
+The same convolved image would later be used for quantile estimation.
+A larger kernel does increase the skewness (and thus the difference between the mean and median, therefore helping in detecting the presence of signal), however, it disfigures the shapes/morphology of the objects.
+
+This new @option{--widekernel} option (and a corresponding @option{--wkhdu} 
option to specify its HDU) is added to solve such cases.
+When it is given, the input will be convolved with both the sharp (given through the @option{--kernel} option) and wide kernels.
+The mean and median are calculated on the dataset that is convolved with the 
wider kernel, then the quantiles are estimated on the image convolved with the 
sharper kernel.
 
 @item
-Outlier rejection in quantile thresholds: When there are large galaxies or
-bright stars in the image, their gradient may be on a smaller scale than
-the selected tile size. In such cases, those tiles will be identified as
-tiles with no signal and thus preserved. An outlier identification
-algorithm has been added to NoiseChisel and can be configured with the
-following options: @option{--outliersigma} and @option{--outliersclip}. For
-a more complete description, see the latter half of @ref{Quantifying signal
-in a tile}.
+Outlier rejection in quantile thresholds: When there are large galaxies or 
bright stars in the image, their gradient may be on a smaller scale than the 
selected tile size.
+In such cases, those tiles will be identified as tiles with no signal and thus 
preserved.
+An outlier identification algorithm has been added to NoiseChisel and can be 
configured with the following options: @option{--outliersigma} and 
@option{--outliersclip}.
+For a more complete description, see the latter half of @ref{Quantifying 
signal in a tile}.
 
 @item
-@option{--blankasforeground}: allows blank pixels to be treated as
-foreground in NoiseChisel's binary operations: the initial erosion
-(@option{--erode}) and opening (@option{--open}) as well as the filling
-holes and opening step for defining pseudo-detections
-(@option{--dthresh}). In the published paper, blank pixels were treated as
-foreground by default. To avoid too many false positive near blank/masked
-regions, blank pixels are now considered to be in the background. This
-option will create the old behavior.
+@option{--blankasforeground}: allows blank pixels to be treated as foreground 
in NoiseChisel's binary operations: the initial erosion (@option{--erode}) and 
opening (@option{--open}) as well as the filling holes and opening step for 
defining pseudo-detections (@option{--dthresh}).
+In the published paper, blank pixels were treated as foreground by default.
+To avoid too many false positives near blank/masked regions, blank pixels are now considered to be in the background.
+This option will restore the old behavior.
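+
+For example, on a hypothetical masked input, the paper's treatment of blank pixels would be reproduced with:
+
+@example
+$ astnoisechisel masked.fits --blankasforeground
+@end example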
 
 @item
-@option{--skyfracnoblank}: To reduce the bias caused by undetected wings of
-galaxies and stars in the Sky measurements, NoiseChisel only uses tiles
-that have a sufficiently large fraction of undetected pixels. Until now the
-reference for this fraction was the whole tile size. With this option, it
-is now possible to ask for ignoring blank pixels when calculating the
-fraction. This is useful when blank/masked pixels are distributed across
-the image. For more, see the description of this option in @ref{Detection
-options}.
+@option{--skyfracnoblank}: To reduce the bias caused by undetected wings of 
galaxies and stars in the Sky measurements, NoiseChisel only uses tiles that 
have a sufficiently large fraction of undetected pixels.
+Until now the reference for this fraction was the whole tile size.
+With this option, it is now possible to ask for ignoring blank pixels when 
calculating the fraction.
+This is useful when blank/masked pixels are distributed across the image.
+For more, see the description of this option in @ref{Detection options}.
 
 @item
-@option{dopening}: Number of openings after applying
-@option{--dthresh}. For more, see the description of this option in
-@ref{Detection options}.
+@option{--dopening}: Number of openings after applying @option{--dthresh}.
+For more, see the description of this option in @ref{Detection options}.
 
 @item
-@option{dopeningngb}: Number of openings after applying
-@option{--dthresh}. For more, see the description of this option in
-@ref{Detection options}.
+@option{--dopeningngb}: The connectivity/neighbors used in the opening of 
@option{--dopening}.
 
 @item
-@option{--holengb}: The connectivity (defined by the number of neighbors)
-to fill holes after applying @option{--dthresh} (above) to find
-pseudo-detections. For more, see the description of this option in
-@ref{Detection options}.
+@option{--holengb}: The connectivity (defined by the number of neighbors) to 
fill holes after applying @option{--dthresh} (above) to find pseudo-detections.
+For more, see the description of this option in @ref{Detection options}.
 
 @item
-@option{--pseudoconcomp}: The connectivity (defined by the number of
-neighbors) to find individual pseudo-detections. For more, see the
-description of this option in @ref{Detection options}.
+@option{--pseudoconcomp}: The connectivity (defined by the number of 
neighbors) to find individual pseudo-detections.
+For more, see the description of this option in @ref{Detection options}.
 
 @item
-@option{--snthresh}: Manually set the S/N of true pseudo-detections and
-thus avoid the need to manually identify this value. For more, see the
-description of this option in @ref{Detection options}.
+@option{--snthresh}: Manually set the S/N of true pseudo-detections and thus 
avoid the need to automatically identify this value.
+For more, see the description of this option in @ref{Detection options}.
 
 @item
-@option{--detgrowquant}: is used to grow the final true detections until a
-given quantile in the same way that clumps are grown during segmentation
-(compare columns 2 and 3 in Figure 10 of the paper). It replaces the old
-@option{--dilate} option in the paper and older versions of
-Gnuastro. Dilation is a blind growth method which causes objects to be boxy
-or diamond shaped when too many layers are added. However, with the growth
-method that is defined now, we can follow the signal into the noise with
-any shape. The appropriate quantile depends on your dataset's correlated
-noise properties and how cleanly it was Sky subtracted. The new
-@option{--detgrowmaxholesize} can also be used to specify the maximum hole
-size to fill as part of this growth, see the description in @ref{Detection
-options} for more details.
-
-This new growth process can be much more successful in detecting diffuse
-flux around true detections compared to dilation and give more realistic
-results, but it can also increase the NoiseChisel run time (depending on
-the given value and input size).
+@option{--detgrowquant}: is used to grow the final true detections until a 
given quantile in the same way that clumps are grown during segmentation 
(compare columns 2 and 3 in Figure 10 of the paper).
+It replaces the old @option{--dilate} option in the paper and older versions 
of Gnuastro.
+Dilation is a blind growth method which causes objects to be boxy or diamond 
shaped when too many layers are added.
+However, with the growth method that is defined now, we can follow the signal 
into the noise with any shape.
+The appropriate quantile depends on your dataset's correlated noise properties 
and how cleanly it was Sky subtracted.
+The new @option{--detgrowmaxholesize} can also be used to specify the maximum 
hole size to fill as part of this growth, see the description in @ref{Detection 
options} for more details.
+
+This new growth process can be much more successful in detecting diffuse flux 
around true detections compared to dilation and give more realistic results, 
but it can also increase the NoiseChisel run time (depending on the given value 
and input size).
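+
+For example (the values and file name are only illustrative), the command below would grow the true detections down to the 0.7 quantile while filling holes of up to 10,000 pixels as part of the growth:
+
+@example
+$ astnoisechisel image.fits --detgrowquant=0.7 \
+                 --detgrowmaxholesize=10000
+@end example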
 
 @item
 @option{--cleangrowndet}: Further clean/remove false detections after
@@ -17465,12 +12859,9 @@ growth, see the descriptions under this option in 
@ref{Detection options}.
 @node Invoking astnoisechisel,  , NoiseChisel changes after publication, 
NoiseChisel
 @subsection Invoking NoiseChisel
 
-NoiseChisel will detect signal in noise producing a multi-extension dataset
-containing a binary detection map which is the same size as the input. Its
-output can be readily used for input into @ref{Segment}, for higher-level
-segmentation, or @ref{MakeCatalog} to do measurements and generate a
-catalog. The executable name is @file{astnoisechisel} with the following
-general template
+NoiseChisel will detect signal in noise producing a multi-extension dataset 
containing a binary detection map which is the same size as the input.
+Its output can be readily used for input into @ref{Segment}, for higher-level 
segmentation, or @ref{MakeCatalog} to do measurements and generate a catalog.
+The executable name is @file{astnoisechisel} with the following general 
template
 
 @example
 $ astnoisechisel [OPTION ...] InputImage.fits
@@ -17494,82 +12885,53 @@ $ astnoisechisel --numchannels=4,1 --tilesize=100,100 
input.fits
 
 @cindex Gaussian
 @noindent
-If NoiseChisel is to do processing (for example you don't want to get help,
-or see the values to each input parameter), an input image should be
-provided with the recognized extensions (see @ref{Arguments}).  NoiseChisel
-shares a large set of common operations with other Gnuastro programs,
-mainly regarding input/output, general processing steps, and general
-operating modes. To help in a unified experience between all of Gnuastro's
-programs, these operations have the same command-line options, see
-@ref{Common options} for a full list/description (they are not repeated
-here).
-
-As in all Gnuastro programs, options can also be given to NoiseChisel in
-configuration files. For a thorough description on Gnuastro's configuration
-file parsing, please see @ref{Configuration files}. All of NoiseChisel's
-options with a short description are also always available on the
-command-line with the @option{--help} option, see @ref{Getting help}. To
-inspect the option values without actually running NoiseChisel, append your
-command with @option{--printparams} (or @option{-P}).
-
-NoiseChisel's input image may contain blank elements (see @ref{Blank
-pixels}). Blank elements will be ignored in all steps of NoiseChisel. Hence
-if your dataset has bad pixels which should be masked with a mask image,
-please use Gnuastro's @ref{Arithmetic} program (in particular its
-@command{where} operator) to convert those pixels to blank pixels before
-running NoiseChisel. Gnuastro's Arithmetic program has bitwise operators
-helping you select specific kinds of bad-pixels when necessary.
-
-A convolution kernel can also be optionally given. If a value (file name)
-is given to @option{--kernel} on the command-line or in a configuration
-file (see @ref{Configuration files}), then that file will be used to
-convolve the image prior to thresholding. Otherwise a default kernel will
-be used. For a 2D image, the default kernel is a 2D Gaussian with a FWHM of
-2 pixels truncated at 5 times the FWHM. This choice of the default kernel
-is discussed in Section 3.1.1 of @url{https://arxiv.org/abs/1505.01664,
-Akhlaghi and Ichikawa [2015]}. For a 3D cube, it is a Gaussian with FWHM of
-1.5 pixels in the first two dimensions and 0.75 pixels in the third
-dimension. See @ref{Convolution kernel} for kernel related options. Passing
-@code{none} to @option{--kernel} will disable convolution. On the other
-hand, through the @option{--convolved} option, you may provide an already
-convolved image, see descriptions below for more.
-
-NoiseChisel defines two tessellations over the input (see
-@ref{Tessellation}). This enables it to deal with possible gradients in the
-input dataset and also significantly improve speed by processing each tile
-on different threads simultaneously. Tessellation related options are
-discussed in @ref{Processing options}. In particular, NoiseChisel uses two
-tessellations (with everything between them identical except the tile
-sizes): a fine-grained one with smaller tiles (used in thresholding and Sky
-value estimations) and another with larger tiles which is used for
-pseudo-detections over non-detected regions of the image. The common
-Tessellation options described in @ref{Processing options} define all
-parameters of both tessellations. The large tile size for the latter
-tessellation is set through the @option{--largetilesize} option. To inspect
-the tessellations on your input dataset, run NoiseChisel with
-@option{--checktiles}.
+If NoiseChisel is to do processing (for example you don't want to get help, or 
see the values to each input parameter), an input image should be provided with 
the recognized extensions (see @ref{Arguments}).
+NoiseChisel shares a large set of common operations with other Gnuastro 
programs, mainly regarding input/output, general processing steps, and general 
operating modes.
+To help in a unified experience between all of Gnuastro's programs, these 
operations have the same command-line options, see @ref{Common options} for a 
full list/description (they are not repeated here).
+
+As in all Gnuastro programs, options can also be given to NoiseChisel in 
configuration files.
+For a thorough description on Gnuastro's configuration file parsing, please 
see @ref{Configuration files}.
+All of NoiseChisel's options with a short description are also always 
available on the command-line with the @option{--help} option, see @ref{Getting 
help}.
+To inspect the option values without actually running NoiseChisel, append your 
command with @option{--printparams} (or @option{-P}).
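+
+For example, the command below would only print the option values that a run on @file{image.fits} would use, without doing any processing:
+
+@example
+$ astnoisechisel image.fits -P
+@end example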
+
+NoiseChisel's input image may contain blank elements (see @ref{Blank pixels}).
+Blank elements will be ignored in all steps of NoiseChisel.
+Hence if your dataset has bad pixels which should be masked with a mask image, 
please use Gnuastro's @ref{Arithmetic} program (in particular its 
@command{where} operator) to convert those pixels to blank pixels before 
running NoiseChisel.
+Gnuastro's Arithmetic program has bitwise operators helping you select 
specific kinds of bad-pixels when necessary.
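+
+As a sketch of such masking (assuming a hypothetical binary @file{mask.fits} that is 1 on the bad pixels; @option{-g1} reads all inputs from HDU 1), the bad pixels could be set to NaN before detection like this:
+
+@example
+$ astarithmetic image.fits mask.fits nan where -g1 \
+                --output=masked.fits
+$ astnoisechisel masked.fits
+@end example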
+
+A convolution kernel can also be optionally given.
+If a value (file name) is given to @option{--kernel} on the command-line or in 
a configuration file (see @ref{Configuration files}), then that file will be 
used to convolve the image prior to thresholding.
+Otherwise a default kernel will be used.
+For a 2D image, the default kernel is a 2D Gaussian with a FWHM of 2 pixels 
truncated at 5 times the FWHM.
+This choice of the default kernel is discussed in Section 3.1.1 of 
@url{https://arxiv.org/abs/1505.01664, Akhlaghi and Ichikawa [2015]}.
+For a 3D cube, it is a Gaussian with FWHM of 1.5 pixels in the first two 
dimensions and 0.75 pixels in the third dimension.
+See @ref{Convolution kernel} for kernel related options.
+Passing @code{none} to @option{--kernel} will disable convolution.
+On the other hand, through the @option{--convolved} option, you may provide an 
already convolved image, see descriptions below for more.
+
+NoiseChisel defines two tessellations over the input (see @ref{Tessellation}).
+This enables it to deal with possible gradients in the input dataset and also 
significantly improve speed by processing each tile on different threads 
simultaneously.
+Tessellation related options are discussed in @ref{Processing options}.
+In particular, NoiseChisel uses two tessellations (with everything between 
them identical except the tile sizes): a fine-grained one with smaller tiles 
(used in thresholding and Sky value estimations) and another with larger tiles 
which is used for pseudo-detections over non-detected regions of the image.
+The common Tessellation options described in @ref{Processing options} define 
all parameters of both tessellations.
+The large tile size for the latter tessellation is set through the 
@option{--largetilesize} option.
+To inspect the tessellations on your input dataset, run NoiseChisel with 
@option{--checktiles}.
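+
+For example (the tile sizes are only illustrative), the two tessellations could be inspected with:
+
+@example
+$ astnoisechisel image.fits --largetilesize=200,200 --checktiles
+@end example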
 
 @cartouche
 @noindent
-@strong{Usage TIP:} Frequently use the options starting with
-@option{--check}. Since the noise properties differ between different
-datasets, you can often play with the parameters/options for a better
-result than the default parameters. You can start with
-@option{--checkdetection} for the main steps. For the full list of
-NoiseChisel's checking options please run:
+@strong{Usage TIP:} Frequently use the options starting with @option{--check}.
+Since the noise properties differ between different datasets, you can often 
play with the parameters/options for a better result than the default 
parameters.
+You can start with @option{--checkdetection} for the main steps.
+For the full list of NoiseChisel's checking options please run:
 @example
 $ astnoisechisel --help | grep check
 @end example
 @end cartouche
 
-When working on 3D datacubes, the tessellation options need three values
-and updating them every time can be annoying/buggy. To simplify the job,
-NoiseChisel also installs a @file{astnoisechisel-3d.conf} configuration
-file (see @ref{Configuration files}). You can use this for default values
-on datacubes. For example, if you installed Gnuastro with the prefix
-@file{/usr/local} (the default location, see @ref{Installation directory}),
-you can benefit from this configuration file by running NoiseChisel like
-the example below.
+When working on 3D datacubes, the tessellation options need three values and 
updating them every time can be annoying/buggy.
+To simplify the job, NoiseChisel also installs a @file{astnoisechisel-3d.conf} 
configuration file (see @ref{Configuration files}).
+You can use this for default values on datacubes.
+For example, if you installed Gnuastro with the prefix @file{/usr/local} (the 
default location, see @ref{Installation directory}), you can benefit from this 
configuration file by running NoiseChisel like the example below.
 
 @example
 $ astnoisechisel cube.fits                                      \
@@ -17580,11 +12942,8 @@ $ astnoisechisel cube.fits                             
         \
 @cindex Alias (shell)
 @cindex Shell startup
 @cindex Startup, shell
-To further simplify the process, you can define a shell alias in any
-startup file (for example @file{~/.bashrc}, see @ref{Installation
-directory}). Assuming that you installed Gnuastro in @file{/usr/local}, you
-can add this line to the startup file (you may put it all in one line, it
-is broken into two lines here for fitting within page limits).
+To further simplify the process, you can define a shell alias in any startup 
file (for example @file{~/.bashrc}, see @ref{Installation directory}).
+Assuming that you installed Gnuastro in @file{/usr/local}, you can add this 
line to the startup file (you may put it all in one line, it is broken into two 
lines here for fitting within page limits).
 
 @example
 alias astnoisechisel-3d="astnoisechisel
@@ -17592,44 +12951,30 @@ alias astnoisechisel-3d="astnoisechisel
 @end example
 
 @noindent
-Using this alias, you can call NoiseChisel with the name
-@command{astnoisechisel-3d} (instead of @command{astnoisechisel}). It will
-automatically load the 3D specific configuration file first, and then parse
-any other arguments, options or configuration files. You can change the
-default values in this 3D configuration file by calling them on the
-command-line as you do with @command{astnoisechisel}@footnote{Recall that
-for single-invocation options, the last command-line invocation takes
-precedence over all previous invocations (including those in the 3D
-configuration file). See the description of @option{--config} in
-@ref{Operating mode options}.}. For example:
+Using this alias, you can call NoiseChisel with the name 
@command{astnoisechisel-3d} (instead of @command{astnoisechisel}).
+It will automatically load the 3D specific configuration file first, and then 
parse any other arguments, options or configuration files.
+You can change the default values in this 3D configuration file by calling 
them on the command-line as you do with 
@command{astnoisechisel}@footnote{Recall that for single-invocation options, 
the last command-line invocation takes precedence over all previous invocations 
(including those in the 3D configuration file).
+See the description of @option{--config} in @ref{Operating mode options}.}.
+For example:
 
 @example
 $ astnoisechisel-3d --numchannels=3,3,1 cube.fits
 @end example
 
-In the sections below, NoiseChisel's options are classified into three
-general classes to help in easy navigation. @ref{NoiseChisel input} mainly
-discusses the options relating to input and those that are shared in both
-detection and segmentation. Options to configure the detection are
-described in @ref{Detection options} and @ref{Segmentation options} we
-discuss how you can fine-tune the segmentation of the detections. Finally
-in @ref{NoiseChisel output} the format of NoiseChisel's output is
-discussed. The order of options here follow the same logical order that the
-respective action takes place within NoiseChisel (note that the output of
-@option{--help} is sorted alphabetically).
-
-Below, we'll discuss NoiseChisel's options, classified into two general
-classes, to help in easy navigation. @ref{NoiseChisel input} mainly
-discusses the basic options relating to inputs and prior to the detection
-process detection. Afterwards, @ref{Detection options} fully describes
-every configuration parameter (option) related to detection and how they
-affect the final result. The order of options in this section follow the
-logical order within NoiseChisel. On first reading (while you are still new
-to NoiseChisel), it is therefore strongly recommended to read the options
-in the given order below. The output of @option{--printparams} (or
-@option{-P}) also has this order. However, the output of @option{--help} is
-sorted alphabetically. Finally, in @ref{NoiseChisel output} the format of
-NoiseChisel's output is discussed.
+Below, we'll discuss NoiseChisel's options, classified into two general 
classes, to help in easy navigation.
+@ref{NoiseChisel input} mainly discusses the basic options relating to the inputs and those that are used prior to the detection process.
+Afterwards, @ref{Detection options} fully describes every configuration 
parameter (option) related to detection and how they affect the final result.
+The order of options in this section follows the logical order within NoiseChisel.
+On first reading (while you are still new to NoiseChisel), it is therefore 
strongly recommended to read the options in the given order below.
+The output of @option{--printparams} (or @option{-P}) also has this order.
+However, the output of @option{--help} is sorted alphabetically.
+Finally, in @ref{NoiseChisel output} the format of NoiseChisel's output is 
discussed.
 
 
 @menu
@@ -17641,99 +12986,64 @@ NoiseChisel's output is discussed.
 @node NoiseChisel input, Detection options, Invoking astnoisechisel, Invoking 
astnoisechisel
 @subsubsection NoiseChisel input
 
-The options here can be used to configure the inputs and output of
-NoiseChisel, along with some general processing options. Recall that you
-can always see the full list of Gnuastro's options with the @option{--help}
-(see @ref{Getting help}), or @option{--printparams} (or @option{-P}) to see
-their values (see @ref{Operating mode options}).
+The options here can be used to configure the inputs and output of 
NoiseChisel, along with some general processing options.
+Recall that you can always see the full list of Gnuastro's options with the 
@option{--help} (see @ref{Getting help}), or @option{--printparams} (or 
@option{-P}) to see their values (see @ref{Operating mode options}).
 
 @table @option
 
 @item -k STR
 @itemx --kernel=STR
-File name of kernel to smooth the image before applying the threshold, see
-@ref{Convolution kernel}. If no convolution is needed, give this option a
-value of @option{none}.
-
-The first step of NoiseChisel is to convolve/smooth the image and use the
-convolved image in multiple steps including the finding and applying of the
-quantile threshold (see @option{--qthresh}).
-
-The @option{--kernel} option is not mandatory. If not called, a 2D Gaussian
-profile with a FWHM of 2 pixels truncated at 5 times the FWHM is used. This
-choice of the default kernel is discussed in Section 3.1.1 of Akhlaghi and
-Ichikawa [2015].
-
-For a 3D cube, when no file name is given to @option{--kernel}, a Gaussian
-with FWHM of 1.5 pixels in the first two dimensions and 0.75 pixels in the
-third dimension will be used. The reason for this particular configuration
-is that commonly in astronomical applications, 3D datasets don't have the
-same nature in all three dimensions, commonly the first two dimensions are
-spatial (RA and Dec) while the third is spectral (for example
-wavelength). The samplings are also different, in the default case, the
-spatial sampling is assumed to be larger than the spectral sampling, hence
-a wider FWHM in the spatial directions, see @ref{Sampling theorem}.
-
-You can use MakeProfiles to build a kernel with any of its recognized
-profile types and parameters. For more details, please see
-@ref{MakeProfiles output dataset}. For example, the command below will make
-a Moffat kernel (with @mymath{\beta=2.8}) with FWHM of 2 pixels truncated
-at 10 times the FWHM.
+File name of kernel to smooth the image before applying the threshold, see 
@ref{Convolution kernel}.
+If no convolution is needed, give this option a value of @option{none}.
+
+The first step of NoiseChisel is to convolve/smooth the image and use the 
convolved image in multiple steps including the finding and applying of the 
quantile threshold (see @option{--qthresh}).
+The @option{--kernel} option is not mandatory.
+If not called, for a 2D image, a 2D Gaussian profile with a FWHM of 2 pixels truncated at 5 times the FWHM is used.
+This choice of the default kernel is discussed in Section 3.1.1 of Akhlaghi 
and Ichikawa [2015].
+
+For a 3D cube, when no file name is given to @option{--kernel}, a Gaussian 
with FWHM of 1.5 pixels in the first two dimensions and 0.75 pixels in the 
third dimension will be used.
+The reason for this particular configuration is that commonly in astronomical 
applications, 3D datasets don't have the same nature in all three dimensions, 
commonly the first two dimensions are spatial (RA and Dec) while the third is 
spectral (for example wavelength).
+The samplings are also different: in the default case, the spatial sampling is assumed to be larger than the spectral sampling, hence a wider FWHM in the spatial directions, see @ref{Sampling theorem}.
+
+You can use MakeProfiles to build a kernel with any of its recognized profile 
types and parameters.
+For more details, please see @ref{MakeProfiles output dataset}.
+For example, the command below will make a Moffat kernel (with 
@mymath{\beta=2.8}) with FWHM of 2 pixels truncated at 10 times the FWHM.
 
 @example
 $ astmkprof --oversample=1 --kernel=moffat,2,2.8,10
 @end example
 
-Since convolution can be the slowest step of NoiseChisel, for large
-datasets, you can convolve the image once with Gnuastro's Convolve (see
-@ref{Convolve}), and use the @option{--convolved} option to feed it
-directly to NoiseChisel. This can help getting faster results when you are
-playing/testing the higher-level options.
+Since convolution can be the slowest step of NoiseChisel, for large datasets, 
you can convolve the image once with Gnuastro's Convolve (see @ref{Convolve}), 
and use the @option{--convolved} option to feed it directly to NoiseChisel.
+This can help in getting faster results when you are playing/testing the higher-level options.
 
 @item --khdu=STR
 HDU containing the kernel in the file given to the @option{--kernel}
 option.
 
 @item --convolved=STR
-Use this file as the convolved image and don't do convolution (ignore
-@option{--kernel}). NoiseChisel will just check the size of the given
-dataset is the same as the input's size. If a wrong image (with the same
-size) is given to this option, the results (errors, bugs, and etc) are
-unpredictable. So please use this option with care and in a highly
-controlled environment, for example in the scenario discussed below.
-
-In almost all situations, as the input gets larger, the single most CPU
-(and time) consuming step in NoiseChisel (and other programs that need a
-convolved image) is convolution. Therefore minimizing the number of
-convolutions can save a significant amount of time in some scenarios. One
-such scenario is when you want to segment NoiseChisel's detections using
-the same kernel (with @ref{Segment}, which also supports this
-@option{--convolved} option). This scenario would require two convolutions
-of the same dataset: once by NoiseChisel and once by Segment. Using this
-option in both programs, only one convolution (prior to running
-NoiseChisel) is enough.
-
-Another common scenario where this option can be convenient is when you are
-testing NoiseChisel (or Segment) for the best parameters. You have to run
-NoiseChisel multiple times and see the effect of each change. However, once
-you are happy with the kernel, re-convolving the input on every change of
-higher-level parameters will greatly hinder, or discourage, further
-testing. With this option, you can convolve the input image with your
-chosen kernel once before running NoiseChisel, then feed it to NoiseChisel
-on each test run and thus save valuable time for better/more tests.
-
-To build your desired convolution kernel, you can use
-@ref{MakeProfiles}. To convolve the image with a given kernel you can use
-@ref{Convolve}. Spatial domain convolution is mandatory: in the frequency
-domain, blank pixels (if present) will cover the whole image and gradients
-will appear on the edges, see @ref{Spatial vs. Frequency domain}.
-
-Below you can see an example of the second scenario: you want to see how
-variation of the growth level (through the @option{--detgrowquant} option)
-will affect the final result. Recall that you can ignore all the extra
-spaces, new lines, and backslash's (`@code{\}') if you are typing in the
-terminal. In a shell script, remove the @code{$} signs at the start of the
-lines.
+Use this file as the convolved image and don't do convolution (ignore 
@option{--kernel}).
+NoiseChisel will just check that the size of the given dataset is the same as the input's size.
+If a wrong image (with the same size) is given to this option, the results 
(errors, bugs, etc) are unpredictable.
+So please use this option with care and in a highly controlled environment, 
for example in the scenario discussed below.
+
+In almost all situations, as the input gets larger, the single most CPU (and 
time) consuming step in NoiseChisel (and other programs that need a convolved 
image) is convolution.
+Therefore minimizing the number of convolutions can save a significant amount 
of time in some scenarios.
+One such scenario is when you want to segment NoiseChisel's detections using 
the same kernel (with @ref{Segment}, which also supports this 
@option{--convolved} option).
+This scenario would require two convolutions of the same dataset: once by 
NoiseChisel and once by Segment.
+Using this option in both programs, only one convolution (prior to running 
NoiseChisel) is enough.
+
+Another common scenario where this option can be convenient is when you are 
testing NoiseChisel (or Segment) for the best parameters.
+You have to run NoiseChisel multiple times and see the effect of each change.
+However, once you are happy with the kernel, re-convolving the input on every 
change of higher-level parameters will greatly hinder, or discourage, further 
testing.
+With this option, you can convolve the input image with your chosen kernel 
once before running NoiseChisel, then feed it to NoiseChisel on each test run 
and thus save valuable time for better/more tests.
+
+To build your desired convolution kernel, you can use @ref{MakeProfiles}.
+To convolve the image with a given kernel you can use @ref{Convolve}.
+Spatial domain convolution is mandatory: in the frequency domain, blank pixels 
(if present) will cover the whole image and gradients will appear on the edges, 
see @ref{Spatial vs. Frequency domain}.
+
+Below you can see an example of the second scenario: you want to see how 
variation of the growth level (through the @option{--detgrowquant} option) will 
affect the final result.
+Recall that you can ignore all the extra spaces, new lines, and backslashes (`@code{\}') if you are typing in the terminal.
+In a shell script, remove the @code{$} signs at the start of the lines.
 
 @example
 ## Make the kernel to convolve with.
@@ -17753,368 +13063,254 @@ $ for g in 60 65 70 75 80 85 90; do                 
         \
 
 
 @item --chdu=STR
-The HDU/extension containing the convolved image in the file given to
-@option{--convolved}.
+The HDU/extension containing the convolved image in the file given to 
@option{--convolved}.
 
 @item -w STR
 @itemx --widekernel=STR
-File name of a wider kernel to use in estimating the difference of the mode
-and median in a tile (this difference is used to identify the significance
-of signal in that tile, see @ref{Quantifying signal in a tile}). As
-displayed in Figure 4 of @url{https://arxiv.org/abs/1505.01664, Akhlaghi
-and Ichikawa [2015]}, a wider kernel will help in identifying the skewness
-caused by data in noise. The image that is convolved with this kernel is
-@emph{only} used for this purpose. Once the mode is found to be
-sufficiently close to the median, the quantile threshold is found on the
-image convolved with the sharper kernel (@option{--kernel}), see
-@option{--qthresh}).
-
-Since convolution will significantly slow down the processing, this feature
-is optional. When it isn't given, the image that is convolved with
-@option{--kernel} will be used to identify good tiles @emph{and} apply the
-quantile threshold. This option is mainly useful in conditions were you
-have a very large, extended, diffuse signal that is still present in the
-usable tiles when using @option{--kernel}. See @ref{Detecting large
-extended targets} for a practical demonstration on how to inspect the tiles
-used in identifying the quantile threshold.
+File name of a wider kernel to use in estimating the difference of the mode 
and median in a tile (this difference is used to identify the significance of 
signal in that tile, see @ref{Quantifying signal in a tile}).
+As displayed in Figure 4 of @url{https://arxiv.org/abs/1505.01664, Akhlaghi 
and Ichikawa [2015]}, a wider kernel will help in identifying the skewness 
caused by data in noise.
+The image that is convolved with this kernel is @emph{only} used for this 
purpose.
+Once the mode is found to be sufficiently close to the median, the quantile threshold is found on the image convolved with the sharper kernel (@option{--kernel}, see @option{--qthresh}).
+
+Since convolution will significantly slow down the processing, this feature is 
optional.
+When it isn't given, the image that is convolved with @option{--kernel} will 
be used to identify good tiles @emph{and} apply the quantile threshold.
+This option is mainly useful in conditions where you have a very large, extended, diffuse signal that is still present in the usable tiles when using @option{--kernel}.
+See @ref{Detecting large extended targets} for a practical demonstration on 
how to inspect the tiles used in identifying the quantile threshold.
 
 @item --whdu=STR
 HDU containing the kernel file given to the @option{--widekernel} option.
 
 @item -L INT[,INT]
 @itemx --largetilesize=INT[,INT]
-The size of each tile for the tessellation with the larger tile
-sizes. Except for the tile size, all the other parameters for this
-tessellation are taken from the common options described in @ref{Processing
-options}. The format is identical to that of the @option{--tilesize} option
-that is discussed in that section.
+The size of each tile for the tessellation with the larger tile sizes.
+Except for the tile size, all the other parameters for this tessellation are 
taken from the common options described in @ref{Processing options}.
+The format is identical to that of the @option{--tilesize} option that is 
discussed in that section.
 @end table
 
 @node Detection options, NoiseChisel output, NoiseChisel input, Invoking 
astnoisechisel
 @subsubsection Detection options
 
-Detection is the process of separating the pixels in the image into two
-groups: 1) Signal, and 2) Noise. Through the parameters below, you can
-customize the detection process in NoiseChisel. Recall that you can always
-see the full list of NoiseChisel's options with the @option{--help} (see
-@ref{Getting help}), or @option{--printparams} (or @option{-P}) to see
-their values (see @ref{Operating mode options}).
+Detection is the process of separating the pixels in the image into two 
groups: 1) Signal, and 2) Noise.
+Through the parameters below, you can customize the detection process in 
NoiseChisel.
+Recall that you can always see the full list of NoiseChisel's options with the 
@option{--help} (see @ref{Getting help}), or @option{--printparams} (or 
@option{-P}) to see their values (see @ref{Operating mode options}).
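+
+For example, to see them on the command-line:
+
+@example
+$ astnoisechisel --help    ## Full list of options.
+$ astnoisechisel -P        ## Values that will be used in a run.
+@end example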
 
 @table @option
 
 @item -Q FLT
 @itemx --meanmedqdiff=FLT
-The maximum acceptable distance between the quantiles of the mean and
-median in each tile, see @ref{Quantifying signal in a tile}. The quantile
-threshold estimates are measured on tiles where the quantiles of their mean
-and median are less distant than the value given to this option. For
-example @option{--meanmedqdiff=0.01} means that only tiles where the mean's
-quantile is between 0.49 and 0.51 (recall that the median's quantile is
-0.5) will be used.
+The maximum acceptable distance between the quantiles of the mean and median 
in each tile, see @ref{Quantifying signal in a tile}.
+The quantile threshold estimates are measured on tiles where the quantiles of 
their mean and median are less distant than the value given to this option.
+For example @option{--meanmedqdiff=0.01} means that only tiles where the 
mean's quantile is between 0.49 and 0.51 (recall that the median's quantile is 
0.5) will be used.
 
 @item --outliersclip=FLT,FLT
-@mymath{\sigma}-clipping parameters for the outlier rejection of the
-quantile threshold. The format of the given values is similar to
-@option{--sigmaclip} below. In NoiseChisel, outlier rejection on tiles is
-used when identifying the quantile thresholds (@option{--qthresh},
-@option{--noerodequant}, and @option{detgrowquant}).
-
-Outlier rejection is useful when the dataset contains a large and diffuse
-(almost flat within each tile) signal. The flatness of the profile will
-cause it to successfully pass the mean-median quantile difference test, so
-we'll need to use the distribution of successful tiles for removing these
-false positives. For more, see the latter half of @ref{Quantifying signal
-in a tile}.
+@mymath{\sigma}-clipping parameters for the outlier rejection of the quantile 
threshold.
+The format of the given values is similar to @option{--sigmaclip} below.
+In NoiseChisel, outlier rejection on tiles is used when identifying the quantile thresholds (@option{--qthresh}, @option{--noerodequant}, and @option{--detgrowquant}).
+
+Outlier rejection is useful when the dataset contains a large and diffuse 
(almost flat within each tile) signal.
+The flatness of the profile will cause it to successfully pass the mean-median 
quantile difference test, so we'll need to use the distribution of successful 
tiles for removing these false positives.
+For more, see the latter half of @ref{Quantifying signal in a tile}.
 
 @item --outliersigma=FLT
-Multiple of sigma to define an outlier. If this option is given a value of
-zero, no outlier rejection will take place. For more see
-@option{--outliersclip} and the latter half of @ref{Quantifying signal in a
-tile}.
+Multiple of sigma to define an outlier.
+If this option is given a value of zero, no outlier rejection will take place.
+For more see @option{--outliersclip} and the latter half of @ref{Quantifying 
signal in a tile}.
 
 @item -t FLT
 @itemx --qthresh=FLT
-The quantile threshold to apply to the convolved image. The detection
-process begins with applying a quantile threshold to each of the tiles in
-the small tessellation. The quantile is only calculated for tiles that
-don't have any significant signal within them, see @ref{Quantifying signal
-in a tile}. Interpolation is then used to give a value to the unsuccessful
-tiles and it is finally smoothed.
+The quantile threshold to apply to the convolved image.
+The detection process begins with applying a quantile threshold to each of the 
tiles in the small tessellation.
+The quantile is only calculated for tiles that don't have any significant 
signal within them, see @ref{Quantifying signal in a tile}.
+Interpolation is then used to give a value to the unsuccessful tiles, and the interpolated grid is finally smoothed.
 
 @cindex Quantile
 @cindex Binary image
 @cindex Foreground pixels
 @cindex Background pixels
-The quantile value is a floating point value between 0 and 1. Assume that
-we have sorted the @mymath{N} data elements of a distribution (the pixels
-in each mesh on the convolved image). The quantile (@mymath{q}) of this
-distribution is the value of the element with an index of (the nearest
-integer to) @mymath{q\times{N}} in the sorted data set. After thresholding
-is complete, we will have a binary (two valued) image. The pixels above the
-threshold are known as foreground pixels (have a value of 1) while those
-which lie below the threshold are known as background (have a value of 0).
+The quantile value is a floating point value between 0 and 1.
+Assume that we have sorted the @mymath{N} data elements of a distribution (the 
pixels in each mesh on the convolved image).
+The quantile (@mymath{q}) of this distribution is the value of the element 
with an index of (the nearest integer to) @mymath{q\times{N}} in the sorted 
data set.
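+For example, in a tile of @mymath{N=10000} sorted pixels, @mymath{q=0.3} corresponds to the value of the element with index @mymath{0.3\times10000=3000}.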
+After thresholding is complete, we will have a binary (two valued) image.
+The pixels above the threshold are known as foreground pixels (have a value of 
1) while those which lie below the threshold are known as background (have a 
value of 0).
 
 @item --smoothwidth=INT
-Width of flat kernel used to smooth the interpolated quantile thresholds,
-see @option{--qthresh} for more.
+Width of flat kernel used to smooth the interpolated quantile thresholds, see 
@option{--qthresh} for more.
 
 @cindex NaN
 @item --checkqthresh
-Check the quantile threshold values on the mesh grid. A file suffixed with
-@file{_qthresh.fits} will be created showing each step. With this option,
-NoiseChisel will abort as soon as quantile estimation has been completed,
-allowing you to inspect the steps leading to the final quantile threshold,
-this can be disabled with @option{--continueaftercheck}. By default the
-output will have the same pixel size as the input, but with the
-@option{--oneelempertile} option, only one pixel will be used for each tile
-(see @ref{Processing options}).
+Check the quantile threshold values on the mesh grid.
+A file suffixed with @file{_qthresh.fits} will be created showing each step.
+With this option, NoiseChisel will abort as soon as quantile estimation has been completed, allowing you to inspect the steps leading to the final quantile threshold; this can be disabled with @option{--continueaftercheck}.
+By default the output will have the same pixel size as the input, but with the 
@option{--oneelempertile} option, only one pixel will be used for each tile 
(see @ref{Processing options}).
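+
+For example, on a hypothetical input named @file{input.fits}:
+
+@example
+$ astnoisechisel input.fits --checkqthresh
+@end example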
 
 @item --blankasforeground
-In the erosion and opening steps below, treat blank elements as foreground
-(regions above the threshold). By default, blank elements in the dataset
-are considered to be background, so if a foreground pixel is touching it,
-it will be eroded. This option is irrelevant if the datasets contains no
-blank elements.
+In the erosion and opening steps below, treat blank elements as foreground 
(regions above the threshold).
+By default, blank elements in the dataset are considered to be background, so a foreground pixel that touches a blank element will be eroded.
+This option is irrelevant if the dataset contains no blank elements.
 
-When there are many blank elements in the dataset, treating them as
-foreground will systematically erode their regions less, therefore
-systematically creating more false positives. So use this option (when
-blank values are present) with care.
+When there are many blank elements in the dataset, treating them as foreground 
will systematically erode their regions less, therefore systematically creating 
more false positives.
+So use this option (when blank values are present) with care.
 
 @item -e INT
 @itemx --erode=INT
 @cindex Erosion
-The number of erosions to apply to the binary thresholded image. Erosion is
-simply the process of flipping (from 1 to 0) any of the foreground pixels
-that neighbor a background pixel. In a 2D image, there are two kinds of
-neighbors, 4-connected and 8-connected neighbors. In a 3D dataset, there
-are three: 6-connected, 18-connected, and 26-connected. You can specify
-which class of neighbors should be used for erosion with the
-@option{--erodengb} option, see below.
-
-Erosion has the effect of shrinking the foreground (above threshold)
-pixels. To put it another way, it expands the holes. This is a founding
-principle in NoiseChisel: it exploits the fact that with very low
-thresholds, the holes in the very low surface brightness regions of an
-image will be smaller than regions that have no signal. Therefore by
-expanding those holes, we are able to separate the regions harboring
-signal.
+The number of erosions to apply to the binary thresholded image.
+Erosion is simply the process of flipping (from 1 to 0) any of the foreground 
pixels that neighbor a background pixel.
+In a 2D image, there are two kinds of neighbors, 4-connected and 8-connected 
neighbors.
+In a 3D dataset, there are three: 6-connected, 18-connected, and 26-connected.
+You can specify which class of neighbors should be used for erosion with the 
@option{--erodengb} option, see below.
+
+Erosion has the effect of shrinking the foreground pixels.
+To put it another way, it expands the holes.
+This is a founding principle in NoiseChisel: it exploits the fact that with 
very low thresholds, the holes in the very low surface brightness regions of an 
image will be smaller than regions that have no signal.
+Therefore by expanding those holes, we are able to separate the regions 
harboring signal.
 
 @item --erodengb=INT
-The type of neighborhood (structuring element) used in erosion, see
-@option{--erode} for an explanation on erosion. If the input is a 2D image,
-only two integer values are acceptable: 4 or 8. For a 3D input datacube,
-the acceptable values are: 6, 18 and 26.
-
-In 2D 4-connectivity, the neighbors of a pixel are defined as the four
-pixels on the top, bottom, right and left of a pixel that share an edge
-with it. The 8-connected neighbors on the other hand include the
-4-connected neighbors along with the other 4 pixels that share a corner
-with this pixel. See Figure 6 (a) and (b) in Akhlaghi and Ichikawa (2015)
-for a demonstration. A similar argument applies to 3D datacubes.
+The type of neighborhood (structuring element) used in erosion, see 
@option{--erode} for an explanation on erosion.
+If the input is a 2D image, only two integer values are acceptable: 4 or 8.
+For a 3D input datacube, the acceptable values are: 6, 18 and 26.
+
+In 2D 4-connectivity, the neighbors of a pixel are defined as the four pixels 
on the top, bottom, right and left of a pixel that share an edge with it.
+The 8-connected neighbors on the other hand include the 4-connected neighbors 
along with the other 4 pixels that share a corner with this pixel.
+See Figure 6 (a) and (b) in Akhlaghi and Ichikawa (2015) for a demonstration.
+A similar argument applies to 3D datacubes.
 
 @item --noerodequant
-Pure erosion is going to carve off sharp and small objects completely out
-of the detected regions. This option can be used to avoid missing such
-sharp and small objects (which have significant pixels, but not over a
-large area). All pixels with a value larger than the significance level
-specified by this option will not be eroded during the erosion step
-above. However, they will undergo the erosion and dilation of the opening
-step below.
-
-Like the @option{--qthresh} option, the significance level is
-determined using the quantile (a value between 0 and 1). Just as a
-reminder, in the normal distribution, @mymath{1\sigma}, @mymath{1.5\sigma},
-and @mymath{2\sigma} are approximately on the 0.84, 0.93, and 0.98
-quantiles.
+Pure erosion is going to carve off sharp and small objects completely out of 
the detected regions.
+This option can be used to avoid missing such sharp and small objects (which 
have significant pixels, but not over a large area).
+Pixels with a value larger than the significance level specified by this option will not be eroded during the erosion step above.
+However, they will undergo the erosion and dilation of the opening step below.
+
+Like the @option{--qthresh} option, the significance level is determined using 
the quantile (a value between 0 and 1).
+Just as a reminder, in the normal distribution, @mymath{1\sigma}, 
@mymath{1.5\sigma}, and @mymath{2\sigma} are approximately on the 0.84, 0.93, 
and 0.98 quantiles.
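+For example, @option{--noerodequant=0.93} will keep pixels above approximately @mymath{1.5\sigma} from being eroded.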
 
 @item -p INT
 @itemx --opening=INT
-Depth of opening to be applied to the eroded binary image. Opening is a
-composite operation. When opening a binary image with a depth of
-@mymath{n}, @mymath{n} erosions (explained in @option{--erode}) are
-followed by @mymath{n} dilations. Simply put, dilation is the inverse of
-erosion. When dilating an image any background pixel is flipped (from 0 to
-1) to become a foreground pixel. Dilation has the effect of fattening the
-foreground. Note that in NoiseChisel, the erosion which is part of opening
-is independent of the initial erosion that is done on the thresholded image
-(explained in @option{--erode}). The structuring element for the opening
-can be specified with the @option{--openingngb} option. Opening has the
-effect of removing the thin foreground connections (mostly noise) between
-separate foreground `islands' (detections) thereby completely isolating
-them. Once opening is complete, we have @emph{initial} detections.
+Depth of opening to be applied to the eroded binary image.
+Opening is a composite operation.
+When opening a binary image with a depth of @mymath{n}, @mymath{n} erosions 
(explained in @option{--erode}) are followed by @mymath{n} dilations.
+Simply put, dilation is the inverse of erosion.
+When dilating an image, any background pixel that neighbors a foreground pixel is flipped (from 0 to 1) to become a foreground pixel.
+Dilation has the effect of fattening the foreground.
+Note that in NoiseChisel, the erosion which is part of opening is independent 
of the initial erosion that is done on the thresholded image (explained in 
@option{--erode}).
+The structuring element for the opening can be specified with the 
@option{--openingngb} option.
+Opening has the effect of removing the thin foreground connections (mostly 
noise) between separate foreground `islands' (detections) thereby completely 
isolating them.
+Once opening is complete, we have @emph{initial} detections.
 
 @item --openingngb=INT
-The class of neighbors used for opening, see @option{--erodengb} for more
-information on acceptable values.
+The structuring element used for opening, see @option{--erodengb} for more 
information about a structuring element.
 
 @item --skyfracnoblank
-Ignore blank pixels when estimating the fraction of undetected pixels for
-Sky estimation. NoiseChisel only measures the Sky over the tiles that have
-a sufficiently large fraction of undetected pixels (value given to
-@option{--minskyfrac}). By default this fraction is found by dividing
-number of undetected pixels in a tile by the tile's area. But this default
-behavior ignores the possibility of blank pixels. In situations that
-blank/masked pixels are scattered across the image and if they are large
-enough, all the tiles can fail the @option{--minskyfrac} test, thus not
-allowing NoiseChisel to proceed. With this option, such scenarios can be
-fixed: the denominator of the fraction will be the number of non-blank
-elements in the tile, not the total tile area.
+Ignore blank pixels when estimating the fraction of undetected pixels for Sky 
estimation.
+NoiseChisel only measures the Sky over the tiles that have a sufficiently 
large fraction of undetected pixels (value given to @option{--minskyfrac}).
+By default this fraction is found by dividing the number of undetected pixels in a tile by the tile's area.
+But this default behavior ignores the possibility of blank pixels.
+In situations where blank/masked pixels are scattered across the image, if they cover a large enough area, all the tiles can fail the @option{--minskyfrac} test, thus not allowing NoiseChisel to proceed.
+With this option, such scenarios can be fixed: the denominator of the fraction 
will be the number of non-blank elements in the tile, not the total tile area.
 
 @item -B FLT
 @itemx --minskyfrac=FLT
-Minimum fraction (value between 0 and 1) of Sky (undetected) areas in a
-tile. Only tiles with a fraction of undetected pixels (Sky) larger than
-this value will be used to estimate the Sky value. NoiseChisel uses this
-option value twice to estimate the Sky value: after initial detections and
-in the end when false detections have been removed.
-
-Because of the PSF and their intrinsic amorphous properties, astronomical
-objects (except cosmic rays) never have a clear cutoff and commonly sink
-into the noise very slowly. Even below the very low thresholds used by
-NoiseChisel. So when a large fraction of the area of one mesh is covered by
-detections, it is very plausible that their faint wings are present in the
-undetected regions (hence causing a bias in any measurement). To get an
-accurate measurement of the above parameters over the tessellation, tiles
-that harbor too many detected regions should be excluded. The used tiles
-are visible in the respective @option{--check} option of the given step.
+Minimum fraction (value between 0 and 1) of Sky (undetected) areas in a tile.
+Only tiles with a fraction of undetected pixels (Sky) larger than this value 
will be used to estimate the Sky value.
+NoiseChisel uses this option value twice to estimate the Sky value: after 
initial detections and in the end when false detections have been removed.
+
+Because of the PSF and their intrinsic amorphous properties, astronomical objects (except cosmic rays) never have a clear cutoff and commonly sink into the noise very slowly, even below the very low thresholds used by NoiseChisel.
+So when a large fraction of the area of one mesh is covered by detections, it 
is very plausible that their faint wings are present in the undetected regions 
(hence causing a bias in any measurement).
+To get an accurate measurement of the above parameters over the tessellation, 
tiles that harbor too many detected regions should be excluded.
+The used tiles are visible in the respective @option{--check} option of the 
given step.
 
 @item --checkdetsky
-Check the initial approximation of the sky value and its standard deviation
-in a FITS file ending with @file{_detsky.fits}. With this option,
-NoiseChisel will abort as soon as the sky value used for defining
-pseudo-detections is complete. This allows you to inspect the steps leading
-to the final quantile threshold, this behavior can be disabled with
-@option{--continueaftercheck}. By default the output will have the same
-pixel size as the input, but with the @option{--oneelempertile} option,
-only one pixel will be used for each tile (see @ref{Processing options}).
+Check the initial approximation of the sky value and its standard deviation in 
a FITS file ending with @file{_detsky.fits}.
+With this option, NoiseChisel will abort as soon as the sky value used for 
defining pseudo-detections is complete.
+This allows you to inspect the steps leading to the initial Sky approximation; this behavior can be disabled with @option{--continueaftercheck}.
+By default the output will have the same pixel size as the input, but with the 
@option{--oneelempertile} option, only one pixel will be used for each tile 
(see @ref{Processing options}).
 
 @item -s FLT,FLT
 @itemx --sigmaclip=FLT,FLT
-The @mymath{\sigma}-clipping parameters for measuring the initial and final
-Sky values from the undetected pixels, see @ref{Sigma clipping}.
-
-This option takes two values which are separated by a comma (@key{,}). Each
-value can either be written as a single number or as a fraction of two
-numbers (for example @code{3,1/10}). The first value to this option is the
-multiple of @mymath{\sigma} that will be clipped (@mymath{\alpha} in that
-section). The second value is the exit criteria. If it is less than 1, then
-it is interpreted as tolerance and if it is larger than one it is assumed
-to be the fixed number of iterations. Hence, in the latter case the value
-must be an integer.
+The @mymath{\sigma}-clipping parameters for measuring the initial and final 
Sky values from the undetected pixels, see @ref{Sigma clipping}.
+
+This option takes two values which are separated by a comma (@key{,}).
+Each value can either be written as a single number or as a fraction of two 
numbers (for example @code{3,1/10}).
+The first value to this option is the multiple of @mymath{\sigma} that will be 
clipped (@mymath{\alpha} in that section).
+The second value is the exit criterion.
+If it is less than 1, it is interpreted as a tolerance; if it is larger than 1, it is assumed to be the fixed number of iterations.
+Hence, in the latter case the value must be an integer.
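+For example, @option{--sigmaclip=3,0.2} requests @mymath{3\sigma}-clipping with a tolerance of 0.2, while @option{--sigmaclip=3,5} requests exactly five iterations of @mymath{3\sigma}-clipping.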
 
 @item -R FLT
 @itemx --dthresh=FLT
-The detection threshold: a multiple of the initial Sky standard deviation
-added with the initial Sky approximation (which you can inspect with
-@option{--checkdetsky}). This flux threshold is applied to the initially
-undetected regions on the unconvolved image. The background pixels that are
-completely engulfed in a 4-connected foreground region are converted to
-background (holes are filled) and one opening (depth of 1) is applied over
-both the initially detected and undetected regions. The Signal to noise
-ratio of the resulting `pseudo-detections' are used to identify true
-vs. false detections. See Section 3.1.5 and Figure 7 in Akhlaghi and
-Ichikawa (2015) for a very complete explanation.
+The detection threshold: a multiple of the initial Sky standard deviation 
added with the initial Sky approximation (which you can inspect with 
@option{--checkdetsky}).
+This flux threshold is applied to the initially undetected regions on the 
unconvolved image.
+The background pixels that are completely engulfed in a 4-connected foreground 
region are converted to background (holes are filled) and one opening (depth of 
1) is applied over both the initially detected and undetected regions.
+The Signal to noise ratios of the resulting `pseudo-detections' are used to identify true vs. false detections.
+See Section 3.1.5 and Figure 7 in Akhlaghi and Ichikawa (2015) for a very 
complete explanation.
 
 @item --dopening=INT
 The number of openings to do after applying @option{--dthresh}.
 
 @item --dopeningngb=INT
-The connectivity used in the opening of @option{--dopening}. In a 2D image
-this must be either 4 or 8. The stronger the connectivity, the more smaller
-regions will be discarded.
+The connectivity used in the opening of @option{--dopening}.
+In a 2D image this must be either 4 or 8.
+The stronger the connectivity, the more small regions will be discarded.
 
 @item --holengb=INT
-The connectivity (defined by the number of neighbors) to fill holes after
-applying @option{--dthresh} (above) to find pseudo-detections. For example
-in a 2D image it must be 4 (the neighbors that are most strongly connected)
-or 8 (all neighbors). The stronger the connectivity, the stronger the hole
-will be enclosed. So setting a value of 8 in a 2D image means that the
-walls of the hole are 4-connected. If standard (near Sky level) values are
-given to @option{--dthresh}, setting @option{--holengb=4}, might fill the
-complete dataset and thus not create enough pseudo-detections.
+The connectivity (defined by the number of neighbors) to fill holes after 
applying @option{--dthresh} (above) to find pseudo-detections.
+For example in a 2D image it must be 4 (the neighbors that are most strongly 
connected) or 8 (all neighbors).
+The stronger the connectivity, the more strongly the hole will be enclosed.
+So setting a value of 8 in a 2D image means that the walls of the hole are 4-connected.
+If standard (near Sky level) values are given to @option{--dthresh}, setting @option{--holengb=4} might fill the complete dataset and thus not create enough pseudo-detections.
 
 @item --pseudoconcomp=INT
-The connectivity (defined by the number of neighbors) to find individual
-pseudo-detections. If it is a weaker connectivity (4 in a 2D image), then
-pseudo-detections that are connected on the corners will be treated as
-separate.
+The connectivity (defined by the number of neighbors) to find individual 
pseudo-detections.
+If it is a weaker connectivity (4 in a 2D image), then pseudo-detections that 
are connected on the corners will be treated as separate.
 
 @item -m INT
 @itemx --snminarea=INT
-The minimum area to calculate the Signal to noise ratio on the
-pseudo-detections of both the initially detected and undetected
-regions. When the area in a pseudo-detection is too small, the Signal to
-noise ratio measurements will not be accurate and their distribution will
-be heavily skewed to the positive. So it is best to ignore any
-pseudo-detection that is smaller than this area. Use
-@option{--detsnhistnbins} to check if this value is reasonable or not.
+The minimum area to calculate the Signal to noise ratio on the 
pseudo-detections of both the initially detected and undetected regions.
+When the area in a pseudo-detection is too small, the Signal to noise ratio 
measurements will not be accurate and their distribution will be heavily skewed 
to the positive.
+So it is best to ignore any pseudo-detection that is smaller than this area.
+Use @option{--detsnhistnbins} to check if this value is reasonable or not.
 
 @item --checksn
-Save the S/N values of the pseudo-detections (and possibly grown detections
-if @option{--cleangrowndet} is called) into separate tables. If
-@option{--tableformat} is a FITS table, each table will be written into a
-separate extension of one file suffixed with @file{_detsn.fits}. If it is
-plain text, a separate file will be made for each table (ending in
-@file{_detsn_sky.txt}, @file{_detsn_det.txt} and
-@file{_detsn_grown.txt}). For more on @option{--tableformat} see @ref{Input
-output options}.
-
-You can use these to inspect the S/N values and their distribution (in
-combination with the @option{--checkdetection} option to see where the
-pseudo-detections are).  You can use Gnuastro's @ref{Statistics} to make a
-histogram of the distribution or any other analysis you would like for
-better understanding of the distribution (for example through a histogram).
+Save the S/N values of the pseudo-detections (and possibly grown detections if 
@option{--cleangrowndet} is called) into separate tables.
+If @option{--tableformat} is a FITS table, each table will be written into a 
separate extension of one file suffixed with @file{_detsn.fits}.
+If it is plain text, a separate file will be made for each table (ending in 
@file{_detsn_sky.txt}, @file{_detsn_det.txt} and @file{_detsn_grown.txt}).
+For more on @option{--tableformat} see @ref{Input output options}.
+
+You can use these to inspect the S/N values and their distribution (in 
combination with the @option{--checkdetection} option to see where the 
pseudo-detections are).
+You can use Gnuastro's @ref{Statistics} program to make a histogram of the distribution, or do any other analysis you would like, to better understand it.
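+
+For example, on a hypothetical input @file{input.fits} (the extension and column numbers below are assumptions, inspect them first with @command{astfits}):
+
+@example
+$ astnoisechisel input.fits --checksn
+$ astfits input_detsn.fits                ## List the extensions.
+$ aststatistics input_detsn.fits -h1 -c2  ## Stats of one column.
+@end example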
 
 @item --minnumfalse=INT
-The minimum number of `pseudo-detections' over the undetected regions to
-identify a Signal-to-Noise ratio threshold. The Signal to noise ratio (S/N)
-of false pseudo-detections in each tile is found using the quantile of the
-S/N distribution of the pseudo-detections over the undetected pixels in each
-mesh. If the number of S/N measurements is not large enough, the quantile
-will not be accurate (can have large scatter). For example if you set
-@option{--snquant=0.99} (or the top 1 percent), then it is best to have at
-least 100 S/N measurements.
+The minimum number of `pseudo-detections' over the undetected regions to 
identify a Signal-to-Noise ratio threshold.
+The Signal to noise ratio (S/N) of false pseudo-detections in each tile is 
found using the quantile of the S/N distribution of the pseudo-detections over 
the undetected pixels in each mesh.
+If the number of S/N measurements is not large enough, the quantile will not 
be accurate (can have large scatter).
+For example if you set @option{--snquant=0.99} (or the top 1 percent), then it 
is best to have at least 100 S/N measurements.
 
 @item -c FLT
 @itemx --snquant=FLT
-The quantile of the Signal to noise ratio distribution of the
-pseudo-detections in each mesh to use for filling the large mesh grid. Note
-that this is only calculated for the large mesh grids that satisfy the
-minimum fraction of undetected pixels (value of @option{--minbfrac}) and
-minimum number of pseudo-detections (value of @option{--minnumfalse}).
+The quantile of the Signal to noise ratio distribution of the 
pseudo-detections in each mesh to use for filling the large mesh grid.
+Note that this is only calculated for the large mesh grids that satisfy the minimum fraction of undetected pixels (value of @option{--minskyfrac}) and minimum number of pseudo-detections (value of @option{--minnumfalse}).
 
 @item --snthresh=FLT
-Manually set the signal-to-noise ratio of true pseudo-detections. With this
-option, NoiseChisel will not attempt to find pseudo-detections over the
-noisy regions of the dataset, but will directly go onto applying the
-manually input value.
+Manually set the signal-to-noise ratio of true pseudo-detections.
+With this option, NoiseChisel will not attempt to find pseudo-detections over the noisy regions of the dataset, but will directly go on to applying the manually input value.
 
-This option is useful in crowded images where there is no blank sky to find
-the sky pseudo-detections. You can get this value on a similarly reduced
-dataset (from another region of the Sky with more undetected regions
-spaces).
+This option is useful in crowded images where there is no blank sky to find 
the sky pseudo-detections.
+You can get this value on a similarly reduced dataset (from another region of the Sky with more undetected regions).
 
 @item -d FLT
 @itemx --detgrowquant=FLT
-Quantile limit to ``grow'' the final detections. As discussed in the
-previous options, after applying the initial quantile threshold, layers of
-pixels are carved off the objects to identify true signal. With this step
-you can return those low surface brightness layers that were carved off
-back to the detections. To disable growth, set the value of this option to
-@code{1}.
-
-The process is as follows: after the true detections are found, all the
-non-detected pixels above this quantile will be put in a list and used to
-``grow'' the true detections (seeds of the growth). Like all quantile
-thresholds, this threshold is defined and applied to the convolved
-dataset. Afterwards, the dataset is dilated once (with minimum
-connectivity) to connect very thin regions on the boundary: imagine
-building a dam at the point rivers spill into an open sea/ocean. Finally,
-all holes are filled. In the geography metaphor, holes can be seen as the
-closed (by the dams) rivers and lakes, so this process is like turning the
-water in all such rivers and lakes into soil. See
-@option{--detgrowmaxholesize} for configuring the hole filling.
+Quantile limit to ``grow'' the final detections.
+As discussed in the previous options, after applying the initial quantile 
threshold, layers of pixels are carved off the objects to identify true signal.
+With this step you can return those low surface brightness layers that were 
carved off back to the detections.
+To disable growth, set the value of this option to @code{1}.
+
+The process is as follows: after the true detections are found, all the 
non-detected pixels above this quantile will be put in a list and used to 
``grow'' the true detections (seeds of the growth).
+Like all quantile thresholds, this threshold is defined and applied to the 
convolved dataset.
+Afterwards, the dataset is dilated once (with minimum connectivity) to connect 
very thin regions on the boundary: imagine building a dam at the point rivers 
spill into an open sea/ocean.
+Finally, all holes are filled.
+In the geography metaphor, holes can be seen as the closed (by the dams) 
rivers and lakes, so this process is like turning the water in all such rivers 
and lakes into soil.
+See @option{--detgrowmaxholesize} for configuring the hole filling.
 
 Note that since the growth occurs on all neighbors of a data element, the
 quantile for 3D detection must be much larger than that of 2D
@@ -18122,67 +13318,46 @@ detection. Recall that in 2D each element has 8 
neighbors while in 3D there
 are 27 neighbors.
 
 @item --detgrowmaxholesize=INT
-The maximum hole size to fill during the final expansion of the true
-detections as described in @option{--detgrowquant}. This is necessary when
-the input contains many smaller objects and can be used to avoid marking
-blank sky regions as detections.
-
-For example multiple galaxies can be positioned such that they surround an
-empty region of sky. If all the holes are filled, the Sky region in between
-them will be taken as a detection which is not desired. To avoid such
-cases, the integer given to this option must be smaller than the hole
-between such objects. However, we should caution that unless the ``hole''
-is very large, the combined faint wings of the galaxies might actually be
-present in between them, so be very careful in not filling such holes.
-
-On the other hand, if you have a very large (and extended) galaxy, the
-diffuse wings of the galaxy may create very large holes over the
-detections. In such cases, a large enough value to this option will cause
-all such holes to be detected as part of the large galaxy and thus help in
-detecting it to extremely low surface brightness limits. Therefore,
-especially when large and extended objects are present in the image, it is
-recommended to give this option (very) large values. For one real-world
-example, see @ref{Detecting large extended targets}.
+The maximum hole size to fill during the final expansion of the true 
detections as described in @option{--detgrowquant}.
+This is necessary when the input contains many smaller objects and can be used 
to avoid marking blank sky regions as detections.
+
+For example multiple galaxies can be positioned such that they surround an 
empty region of sky.
+If all the holes are filled, the Sky region in between them will be taken as a 
detection which is not desired.
+To avoid such cases, the integer given to this option must be smaller than the 
hole between such objects.
+However, we should caution that unless the ``hole'' is very large, the combined faint wings of the galaxies might actually be present in between them, so be very careful not to fill such holes.
+
+On the other hand, if you have a very large (and extended) galaxy, the diffuse 
wings of the galaxy may create very large holes over the detections.
+In such cases, a large enough value to this option will cause all such holes 
to be detected as part of the large galaxy and thus help in detecting it to 
extremely low surface brightness limits.
+Therefore, especially when large and extended objects are present in the 
image, it is recommended to give this option (very) large values.
+For one real-world example, see @ref{Detecting large extended targets}.
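+
+For example, a hypothetical run on an image containing a single large, extended galaxy might be (the option values are only illustrative):
+
+@example
+$ astnoisechisel input.fits --detgrowquant=0.75 \
+                 --detgrowmaxholesize=10000
+@end example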
 
 @item --cleangrowndet
-After dilation, if the signal-to-noise ratio of a detection is less than
-the derived pseudo-detection S/N limit, that detection will be
-discarded. In an ideal/clean noise, a true detection's S/N should be larger
-than its constituent pseudo-detections because its area is larger and it
-also covers more signal. However, on a false detections (especially at
-lower @option{--snquant} values), the increase in size can cause a decrease
-in S/N below that threshold.
-
-This will improve purity and not change completeness (a true detection will
-not be discarded). Because a true detection has flux in its vicinity and
-dilation will catch more of that flux and increase the S/N. So on a true
-detection, the final S/N cannot be less than pseudo-detections.
-
-However, in many real images bad processing creates artifacts that cannot
-be accurately removed by the Sky subtraction. In such cases, this option
-will decrease the completeness (will artificially discard true
-detections). So this feature is not default and should to be explicitly
-called when you know the noise is clean.
+After dilation, if the signal-to-noise ratio of a detection is less than the 
derived pseudo-detection S/N limit, that detection will be discarded.
+In ideal/clean noise, a true detection's S/N should be larger than that of its constituent pseudo-detections, because its area is larger and it also covers more signal.
+However, on false detections (especially at lower @option{--snquant} values), the increase in size can cause a decrease in S/N below that threshold.
+
+This will improve purity without changing completeness (a true detection will not be discarded): a true detection has flux in its vicinity, so dilation will catch more of that flux and increase the S/N.
+So on a true detection, the final S/N cannot be less than that of its pseudo-detections.
+
+However, in many real images bad processing creates artifacts that cannot be 
accurately removed by the Sky subtraction.
+In such cases, this option will decrease the completeness (will artificially 
discard true detections).
+So this feature is not the default and should only be called explicitly when you know the noise is clean.
 
 
 @item --checkdetection
-Every step of the detection process will be added as an extension to a file
-with the suffix @file{_det.fits}. Going through each would just be a repeat
-of the explanations above and also of those in Akhlaghi and Ichikawa
-(2015). The extension label should be sufficient to recognize which step
-you are observing. Viewing all the steps can be the best guide in choosing
-the best set of parameters. With this option, NoiseChisel will abort as
-soon as a snapshot of all the detection process is saved. This behavior can
-be disabled with @option{--continueaftercheck}.
+Every step of the detection process will be added as an extension to a file 
with the suffix @file{_det.fits}.
+Going through each would just be a repeat of the explanations above and also 
of those in Akhlaghi and Ichikawa (2015).
+The extension label should be sufficient to recognize which step you are 
observing.
+Viewing all the steps can be the best guide in choosing the best set of 
parameters.
+With this option, NoiseChisel will abort as soon as a snapshot of all the 
detection process is saved.
+This behavior can be disabled with @option{--continueaftercheck}.
 
 @item --checksky
-Check the derivation of the final sky and its standard deviation values on
-the mesh grid. With this option, NoiseChisel will abort as soon as the sky
-value is estimated over the image (on each tile). This behavior can be
-disabled with @option{--continueaftercheck}. By default the output will
-have the same pixel size as the input, but with the
-@option{--oneelempertile} option, only one pixel will be used for each tile
-(see @ref{Processing options}).
+Check the derivation of the final sky and its standard deviation values on the 
mesh grid.
+With this option, NoiseChisel will abort as soon as the sky value is estimated 
over the image (on each tile).
+This behavior can be disabled with @option{--continueaftercheck}.
+By default the output will have the same pixel size as the input, but with the 
@option{--oneelempertile} option, only one pixel will be used for each tile 
(see @ref{Processing options}).
 
 @end table
 
@@ -18193,136 +13368,82 @@ have the same pixel size as the input, but with the
 @node NoiseChisel output,  , Detection options, Invoking astnoisechisel
 @subsubsection NoiseChisel output
 
-NoiseChisel's output is a multi-extension FITS file. The main
-extension/dataset is a (binary) detection map. It has the same size as the
-input but with only two possible values for all pixels: 0 (for pixels
-identified as noise) and 1 (for those identified as signal/detections). The
-detection map is followed by a Sky and Sky standard deviation dataset
-(which are calculated from the binary image). By default (when
-@option{--rawoutput} isn't called), NoiseChisel will also subtract the Sky
-value from the input and save the sky-subtracted input as the first
-extension in the output with data. The zero-th extension (that contains no
-data), contains NoiseChisel's configuration as FITS keywords, see
-@ref{Output FITS files}.
+NoiseChisel's output is a multi-extension FITS file.
+The main extension/dataset is a (binary) detection map.
+It has the same size as the input but with only two possible values for all 
pixels: 0 (for pixels identified as noise) and 1 (for those identified as 
signal/detections).
+The detection map is followed by a Sky and Sky standard deviation dataset 
(which are calculated from the binary image).
+By default (when @option{--rawoutput} isn't called), NoiseChisel will also 
subtract the Sky value from the input and save the sky-subtracted input as the 
first extension in the output with data.
+The zero-th extension (which contains no data) contains NoiseChisel's configuration as FITS keywords, see @ref{Output FITS files}.
+
+The name of the output file can be set by giving a value to @option{--output} 
(this is a common option between all programs and is therefore discussed in 
@ref{Input output options}).
+If @option{--output} isn't used, the input name will be suffixed with 
@file{_detected.fits} and used as output, see @ref{Automatic output}.
+If any of the options starting with @option{--check*} are given, NoiseChisel 
won't complete and will abort as soon as the respective check images are 
created.
+For more information on the different check images, see the description for 
the @option{--check*} options in @ref{Detection options} (this can be disabled 
with @option{--continueaftercheck}).
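+
+For example, assuming the default output name for a hypothetical input called @file{input.fits}, you can list the output's extensions with Gnuastro's Fits program:
+
+@example
+$ astfits input_detected.fits
+@end example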
 
-The name of the output file can be set by giving a value to
-@option{--output} (this is a common option between all programs and is
-therefore discussed in @ref{Input output options}). If @option{--output}
-isn't used, the input name will be suffixed with @file{_detected.fits} and
-used as output, see @ref{Automatic output}. If any of the options starting
-with @option{--check*} are given, NoiseChisel won't complete and will abort
-as soon as the respective check images are created. For more information on
-the different check images, see the description for the @option{--check*}
-options in @ref{Detection options} (this can be disabled with
-@option{--continueaftercheck}).
-
-The last two extensions of the output are the Sky and its Standard
-deviation, see @ref{Sky value} for a complete explanation. They are
-calculated on the tile grid that you defined for NoiseChisel. By default
-these datasets will have the same size as the input, but with all the
-pixels in one tile given one value. To be more space-efficient (keep only
-one pixel per tile), you can use the @option{--oneelempertile} option, see
-@ref{Tessellation}.
+The last two extensions of the output are the Sky and its Standard deviation, 
see @ref{Sky value} for a complete explanation.
+They are calculated on the tile grid that you defined for NoiseChisel.
+By default these datasets will have the same size as the input, but with all 
the pixels in one tile given one value.
+To be more space-efficient (keep only one pixel per tile), you can use the 
@option{--oneelempertile} option, see @ref{Tessellation}.
 
 @cindex GNOME
-To visually inspect any of NoiseChisel's output files, assuming you use SAO
-DS9, you can configure your Graphic User Interface (GUI) to open
-NoiseChisel's output as a multi-extension data cube. This will allow you to
-flip through the different extensions and visually inspect the
-results. This process has been described for the GNOME GUI (most common GUI
-in GNU/Linux operating systems) in @ref{Viewing multiextension FITS
-images}. NoiseChisel's output configuration options are described in detail
-below.
+To inspect any of NoiseChisel's output files, assuming you use SAO DS9, you can configure your Graphical User Interface (GUI) to open NoiseChisel's output as a multi-extension data cube.
+This will allow you to flip through the different extensions and visually 
inspect the results.
+This process has been described for the GNOME GUI (most common GUI in 
GNU/Linux operating systems) in @ref{Viewing multiextension FITS images}.
+
+NoiseChisel's output configuration options are described in detail below.
 
 @table @option
 @item --continueaftercheck
-Continue NoiseChisel after any of the options starting with
-@option{--check} (see @ref{Detection options}. NoiseChisel involves many
-steps and as a result, there are many checks, allowing you to inspect the
-status of the processing. The results of each step affect the next steps of
-processing. Therefore, when you want to check the status of the processing
-at one step, the time spent to complete NoiseChisel is just
-wasted/distracting time.
-
-To encourage easier experimentation with the option values, when you use
-any of the NoiseChisel options that start with @option{--check},
-NoiseChisel will abort once its desired extensions have been written. With
-@option{--continueaftercheck} option, you can disable this behavior and ask
-NoiseChisel to continue with the rest of the processing, even after the
-requested check files are complete.
+Continue NoiseChisel after any of the options starting with @option{--check} (see @ref{Detection options}).
+NoiseChisel involves many steps and as a result, there are many checks, 
allowing you to inspect the status of the processing.
+The results of each step affect the next steps of processing.
+Therefore, when you only want to check the status of the processing at one step, the time spent to complete the rest of NoiseChisel's processing is wasted/distracting.
+
+To encourage easier experimentation with the option values, when you use any 
of the NoiseChisel options that start with @option{--check}, NoiseChisel will 
abort once its desired extensions have been written.
+With the @option{--continueaftercheck} option, you can disable this behavior and ask NoiseChisel to continue with the rest of the processing, even after the requested check files are complete.
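+
+For example, the hypothetical command below will write @file{input_qthresh.fits}, then continue to also produce the final @file{input_detected.fits}:
+
+@example
+$ astnoisechisel input.fits --checkqthresh --continueaftercheck
+@end example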
 
 @item --ignoreblankintiles
-Don't set the input's blank pixels to blank in the tiled outputs (for
-example Sky and Sky standard deviation extensions of the output). This is
-only applicable when the tiled output has the same size as the input, in
-other words, when @option{--oneelempertile} isn't called.
+Don't set the input's blank pixels to blank in the tiled outputs (for example 
Sky and Sky standard deviation extensions of the output).
+This is only applicable when the tiled output has the same size as the input, 
in other words, when @option{--oneelempertile} isn't called.
 
-By default, blank values in the input (commonly on the edges which are
-outside the survey/field area) will be set to blank in the tiled outputs
-also. But in other scenarios this default behavior is not desired: for
-example if you have masked something in the input, but want the tiled
-output under that also.
+By default, blank values in the input (commonly on the edges which are outside 
the survey/field area) will be set to blank in the tiled outputs also.
+But in other scenarios this default behavior is not desired: for example, if you have masked something in the input but still want the tiled output to cover that region.
 
 @item -l
 @itemx --label
-Run a connected-components algorithm on the finally detected pixels to
-identify which pixels are connected to which. By default the main output is
-a binary dataset with only two values: 0 (for noise) and 1 (for
-signal/detections). See @ref{NoiseChisel output} for more.
-
-The purpose of NoiseChisel is to detect targets that are extended and
-diffuse, with outer parts that sink into the noise very gradually (galaxies
-and stars for example). Since NoiseChisel digs down to extremely low
-surface brightness values, many such targets will commonly be detected
-together as a single large body of connected pixels.
-
-To properly separate connected objects, sophisticated segmentation methods
-are commonly necessary on NoiseChisel's output. Gnuastro has the dedicated
-@ref{Segment} program for this job. Since input images are commonly large
-and can take a significant volume, the extra volume necessary to store the
-labels of the connected components in the detection map (which will be
-created with this @option{--label} option, in 32-bit signed integer type)
-can thus be a major waste of space. Since the default output is just a
-binary dataset, an 8-bit unsigned dataset is enough.
-
-The binary output will also encourage users to segment the result
-separately prior to doing higher-level analysis. As an alternative to
-@option{--label}, if you have the binary detection image, you can use the
-@code{connected-components} operator in Gnuastro's Arithmetic program to
-identify regions that are connected with each other. For example with this
-command (assuming NoiseChisel's output is called @file{nc.fits}):
+Run a connected-components algorithm on the finally detected pixels to 
identify which pixels are connected to which.
+By default the main output is a binary dataset with only two values: 0 (for 
noise) and 1 (for signal/detections).
+See @ref{NoiseChisel output} for more.
+
+The purpose of NoiseChisel is to detect targets that are extended and diffuse, 
with outer parts that sink into the noise very gradually (galaxies and stars 
for example).
+Since NoiseChisel digs down to extremely low surface brightness values, many 
such targets will commonly be detected together as a single large body of 
connected pixels.
+
+To properly separate connected objects, sophisticated segmentation methods are 
commonly necessary on NoiseChisel's output.
+Gnuastro has the dedicated @ref{Segment} program for this job.
+Since input images are commonly large and can take a significant volume, the extra volume necessary to store the labels of the connected components in the detection map (which will be created with this @option{--label} option, in 32-bit signed integer type) can be a major waste of space.
+Since the default output is just a binary dataset, an 8-bit unsigned dataset 
is enough.
+
+The binary output will also encourage users to segment the result separately 
prior to doing higher-level analysis.
+As an alternative to @option{--label}, if you have the binary detection image, 
you can use the @code{connected-components} operator in Gnuastro's Arithmetic 
program to identify regions that are connected with each other.
+For example with this command (assuming NoiseChisel's output is called 
@file{nc.fits}):
 
 @example
 $ astarithmetic nc.fits 2 connected-components -hDETECTIONS
 @end example
 
 @item --rawoutput
-Don't include the Sky-subtracted input image as the first extension of the
-output. By default, the Sky-subtracted input is put in the first extension
-of the output. The next extensions are NoiseChisel's main outputs described
-above.
+Don't include the Sky-subtracted input image as the first extension of the 
output.
+By default, the Sky-subtracted input is put in the first extension of the 
output.
+The next extensions are NoiseChisel's main outputs described above.
+
+The extra Sky-subtracted input can be convenient in checking NoiseChisel's 
output and comparing the detection map with the input: visually see if 
everything you expected is detected (reasonable completeness) and that you 
don't have too many false detections (reasonable purity).
+This visual inspection is simplified if you use SAO DS9 to view NoiseChisel's 
output as a multi-extension data-cube, see @ref{Viewing multiextension FITS 
images}.
+
+When you are satisfied with your NoiseChisel configuration (so you don't 
need to check on every run), or you want to archive/transfer the outputs, 
or the datasets become large, or you are running NoiseChisel as part of a 
pipeline, this Sky-subtracted input image can be a significant burden 
(taking up a large volume).
+The fact that the input is also noisy makes it hard to compress 
efficiently.
 
-The extra Sky-subtracted input can be convenient in checking NoiseChisel's
-output and comparing the detection map with the input: visually see if
-everything you expected is detected (reasonable completeness) and that you
-don't have too many false detections (reasonable purity). This visual
-inspection is simplified if you use SAO DS9 to view NoiseChisel's output as
-a multi-extension data-cube, see @ref{Viewing multiextension FITS images}.
-
-When you are satisfied with your NoiseChisel configuration (therefore you
-don't need to check on every run), or you want to archive/transfer the
-outputs, or the datasets become large, or you are running NoiseChisel as
-part of a pipeline, this Sky-subtracted input image can be a significant
-burden (take up a large volume). The fact that the input is also noisy,
-makes it hard to compress it efficiently.
-
-In such cases, this @option{--rawoutput} can be used to avoid the extra
-sky-subtracted input in the output. It is always possible to easily produce
-the Sky-subtracted dataset from the input (assuming it is in extension
-@code{1} of @file{in.fits}) and the @code{SKY} extension of NoiseChisel's
-output (let's call it @file{nc.fits}) with a command like below (assuming
-NoiseChisel wasn't run with @option{--oneelempertile}, see
-@ref{Tessellation}):
+In such cases, this @option{--rawoutput} can be used to avoid the extra 
sky-subtracted input in the output.
+It is always possible to produce the Sky-subtracted dataset from the 
input (assuming it is in extension @code{1} of @file{in.fits}) and the 
@code{SKY} extension of NoiseChisel's output (let's call it @file{nc.fits}) 
with a command like the one below (assuming NoiseChisel wasn't run with 
@option{--oneelempertile}, see @ref{Tessellation}):
 
 @example
 $ astarithmetic in.fits nc.fits - -h1 -hSKY
@@ -18332,14 +13453,10 @@ $ astarithmetic in.fits nc.fits - -h1 -hSKY
 @cartouche
 @noindent
 @cindex Compression
-@strong{Save space:} with the @option{--rawoutput} and
-@option{--oneelempertile}, NoiseChisel's output will only be one binary
-detection map and two much smaller arrays with one value per tile. Since
-none of these have noise they can be compressed very effectively (without
-any loss of data) with exceptionally high compression ratios. This makes it
-easy to archive, or transfer, NoiseChisel's output even on huge
-datasets. To compress it with the most efficient method (take up less
-volume), run the following command:
+@strong{Save space:} with the @option{--rawoutput} and 
@option{--oneelempertile} options, NoiseChisel's output will only be one 
binary detection map and two much smaller arrays with one value per tile.
+Since none of these have noise, they can be compressed very effectively 
(without any loss of data) with exceptionally high compression ratios.
+This makes it easy to archive, or transfer, NoiseChisel's output even on huge 
datasets.
+To compress it with the most efficient method (taking up the least volume), 
run the following command:
 
 @cindex GNU Gzip
 @example
@@ -18347,10 +13464,8 @@ $ gzip --best noisechisel_output.fits
 @end example
 
 @noindent
-The resulting @file{.fits.gz} file can then be fed into any of Gnuastro's
-programs directly, or viewed in viewers like SAO DS9, without having to
-decompress it separately (they will just take a little longer, because they
-have to internally decompress it before starting).
+The resulting @file{.fits.gz} file can then be fed into any of Gnuastro's 
programs directly, or viewed in viewers like SAO DS9, without having to 
decompress it separately (they will just take a little longer, because they 
have to internally decompress it before starting).
+See @ref{NoiseChisel optimization for storage} for an example on a real 
dataset.
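+
+For example, a minimal sketch using the compressed file of the command 
above (the Statistics call is only illustrative):
+
+@example
+$ aststatistics noisechisel_output.fits.gz -hSKY --median
+@end example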
 @end cartouche
 
 
@@ -18365,27 +13480,16 @@ have to internally decompress it before starting).
 @node Segment, MakeCatalog, NoiseChisel, Data analysis
 @section Segment
 
-Once signal is separated from noise (for example with @ref{NoiseChisel}),
-you have a binary dataset: each pixel is either signal (1) or noise
-(0). Signal (for example every galaxy in your image) has been ``detected'',
-but all detections have a label of 1. Therefore while we know which pixels
-contain signal, we still can't find out how many galaxies they contain or
-which detected pixels correspond to which galaxy. At the lowest (most
-generic) level, detection is a kind of segmentation (segmenting the the
-whole dataset into signal and noise, see @ref{NoiseChisel}). Here, we'll
-define segmentation only on signal: to separate and find sub-structure
-within the detections.
+Once signal is separated from noise (for example with @ref{NoiseChisel}), you 
have a binary dataset: each pixel is either signal (1) or noise (0).
+Signal (for example every galaxy in your image) has been ``detected'', but all 
detections have a label of 1.
+Therefore, while we know which pixels contain signal, we still can't find 
out how many galaxies they contain or which detected pixels correspond to 
which galaxy.
+At the lowest (most generic) level, detection is a kind of segmentation 
(segmenting the whole dataset into signal and noise, see @ref{NoiseChisel}).
+Here, we'll define segmentation only on signal: to separate and find 
sub-structure within the detections.
 
 @cindex Connected component labeling
-If the targets are clearly separated, or their detected regions aren't
-touching, a simple connected
-components@footnote{@url{https://en.wikipedia.org/wiki/Connected-component_labeling}}
-algorithm (very basic segmentation) is enough to separate the regions that
-are touching/connected. This is such a basic and simple form of
-segmentation that Gnuastro's Arithmetic program has an operator for it: see
-@code{connected-components} in @ref{Arithmetic operators}. Assuming the
-binary dataset is called @file{binary.fits}, you can use it with a command
-like this:
+If the targets are clearly separated, or their detected regions aren't 
touching, a simple connected 
components@footnote{@url{https://en.wikipedia.org/wiki/Connected-component_labeling}}
 algorithm (very basic segmentation) is enough to separate the regions that are 
touching/connected.
+This is such a basic and simple form of segmentation that Gnuastro's 
Arithmetic program has an operator for it: see @code{connected-components} in 
@ref{Arithmetic operators}.
+Assuming the binary dataset is called @file{binary.fits}, you can use it with 
a command like this:
 
 @example
 $ astarithmetic binary.fits 2 connected-components
@@ -18400,62 +13504,38 @@ like below:
 $ astarithmetic in.fits 100 gt 2 connected-components
 @end example
 
-However, in most astronomical situations our targets are not nicely
-separated or have a sharp boundary/edge (for a threshold to suffice): they
-touch (for example merging galaxies), or are simply in the same
-line-of-sight (which is much more common). This causes their images to
-overlap.
-
-In particular, when you do your detection with NoiseChisel, you will detect
-signal to very low surface brightness limits: deep into the faint wings of
-galaxies or bright stars (which can extend very far and irregularly from
-their center). Therefore, it often happens that several galaxies are
-detected as one large detection. Since they are touching, a simple
-connected components algorithm will not suffice. It is therefore necessary
-to do a more sophisticated segmentation and break up the detected pixels
-(even those that are touching) into multiple target objects as accurately
-as possible.
-
-Segment will use a detection map and its corresponding dataset to find
-sub-structure over the detected areas and use them for its
-segmentation. Until Gnuastro version 0.6 (released in 2018), Segment was
-part of @ref{NoiseChisel}. Therefore, similar to NoiseChisel, the best
-place to start reading about Segment and understanding what it does (with
-many illustrative figures) is Section 3.2 of
-@url{https://arxiv.org/abs/1505.01664, Akhlaghi and Ichikawa [2015]}.
+However, in most astronomical situations our targets are neither nicely 
separated, nor do they have a sharp boundary/edge (for a threshold to 
suffice): they touch (for example merging galaxies), or are simply in the 
same line-of-sight (which is much more common).
+This causes their images to overlap.
+
+In particular, when you do your detection with NoiseChisel, you will detect 
signal to very low surface brightness limits: deep into the faint wings of 
galaxies or bright stars (which can extend very far and irregularly from their 
center).
+Therefore, it often happens that several galaxies are detected as one large 
detection.
+Since they are touching, a simple connected components algorithm will not 
suffice.
+It is therefore necessary to do a more sophisticated segmentation and break up 
the detected pixels (even those that are touching) into multiple target objects 
as accurately as possible.
+
+Segment will use a detection map and its corresponding dataset to find 
sub-structure over the detected areas and use them for its segmentation.
+Until Gnuastro version 0.6 (released in 2018), Segment was part of 
@ref{NoiseChisel}.
+Therefore, similar to NoiseChisel, the best place to start reading about 
Segment and understanding what it does (with many illustrative figures) is 
Section 3.2 of @url{https://arxiv.org/abs/1505.01664, Akhlaghi and Ichikawa 
[2015]}.
 
 @cindex river
 @cindex Watershed algorithm
-As a summary, Segment first finds true @emph{clump}s over the
-detections. Clumps are associated with local maxima/minima@footnote{By
-default the maximum is used as the first clump pixel, to define clumps
-based on local minima, use the @option{--minima} option.} and extend over
-the neighboring pixels until they reach a local minimum/maximum
-(@emph{river}/@emph{watershed}). By default, Segment will use the
-distribution of clump signal-to-noise ratios over the undetected regions as
-reference to find ``true'' clumps over the detections. Using the undetected
-regions can be disabled by directly giving a signal-to-noise ratio to
-@option{--clumpsnthresh}.
-
-The true clumps are then grown to a certain threshold over the
-detections. Based on the strength of the connections (rivers/watersheds)
-between the grown clumps, they are considered parts of one @emph{object} or
-as separate @emph{object}s. See Section 3.2 of Akhlaghi and Ichikawa [2015]
-(link above) for more. Segment's main output are thus two labeled datasets:
-1) clumps, and 2) objects. See @ref{Segment output} for more.
-
-To start learning about Segment, especially in relation to detection
-(@ref{NoiseChisel}) and measurement (@ref{MakeCatalog}), the recommended
-references are @url{https://arxiv.org/abs/1505.01664, Akhlaghi and Ichikawa
-[2015]} and @url{https://arxiv.org/abs/1611.06387, Akhlaghi [2016]}.
-
-Those papers cannot be updated any more but the software will evolve. For
-example Segment became a separate program (from NoiseChisel) in 2018 (after
-those papers were published). Therefore this book is the definitive
-reference. To help in the transition from those papers to the software you
-are using, see @ref{Segment changes after publication}. Finally, in
-@ref{Invoking astsegment}, we'll discuss Segment's inputs, outputs and
-configuration options.
+As a summary, Segment first finds true @emph{clump}s over the detections.
+Clumps are associated with local maxima/minima@footnote{By default the 
maximum is used as the first clump pixel; to define clumps based on local 
minima, use the @option{--minima} option.} and extend over the neighboring 
pixels until they reach a local minimum/maximum 
(@emph{river}/@emph{watershed}).
+By default, Segment will use the distribution of clump signal-to-noise ratios 
over the undetected regions as reference to find ``true'' clumps over the 
detections.
+Using the undetected regions can be disabled by directly giving a 
signal-to-noise ratio to @option{--clumpsnthresh}.
+
+The true clumps are then grown to a certain threshold over the detections.
+Based on the strength of the connections (rivers/watersheds) between the grown 
clumps, they are considered parts of one @emph{object} or as separate 
@emph{object}s.
+See Section 3.2 of Akhlaghi and Ichikawa [2015] (link above) for more.
+Segment's main outputs are thus two labeled datasets: 1) clumps, and 2) objects.
+See @ref{Segment output} for more.
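+
+For example, here is a minimal hypothetical run on NoiseChisel's default 
output (all necessary inputs are in @file{nc.fits}, see @ref{Segment 
input}):
+
+@example
+$ astsegment nc.fits --output=seg.fits
+@end example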
+
+To start learning about Segment, especially in relation to detection 
(@ref{NoiseChisel}) and measurement (@ref{MakeCatalog}), the recommended 
references are @url{https://arxiv.org/abs/1505.01664, Akhlaghi and Ichikawa 
[2015]} and @url{https://arxiv.org/abs/1611.06387, Akhlaghi [2016]}.
+
+Those papers cannot be updated any more, but the software will evolve.
+For example Segment became a separate program (from NoiseChisel) in 2018 
(after those papers were published).
+Therefore this book is the definitive reference.
+To help in the transition from those papers to the software you are using, see 
@ref{Segment changes after publication}.
+Finally, in @ref{Invoking astsegment}, we'll discuss Segment's inputs, outputs 
and configuration options.
 
 
 @menu
@@ -18466,105 +13546,67 @@ configuration options.
 @node Segment changes after publication, Invoking astsegment, Segment, Segment
 @subsection Segment changes after publication
 
-Segment's main algorithm and working strategy were initially defined and
-introduced in Section 3.2 of @url{https://arxiv.org/abs/1505.01664,
-Akhlaghi and Ichikawa [2015]}. Prior to Gnuastro version 0.6 (released
-2018), one program (NoiseChisel) was in charge of detection @emph{and}
-segmentation. to increase creativity and modularity, NoiseChisel's
-segmentation features were spun-off into a separate program (Segment). It
-is strongly recommended to read that paper for a good understanding of what
-Segment does, how it relates to detection, and how each parameter
-influences the output. That paper has a large number of figures showing
-every step on multiple mock and real examples.
-
-However, the paper cannot be updated anymore, but Segment has evolved (and
-will continue to do so): better algorithms or steps have been (and will be)
-found. This book is thus the final and definitive guide to Segment. The aim
-of this section is to make the transition from the paper to your installed
-version, as smooth as possible through the list below. For a more detailed
-list of changes in previous Gnuastro releases/versions, please follow the
-@file{NEWS} file@footnote{The @file{NEWS} file is present in the released
-Gnuastro tarball, see @ref{Release tarball}.}.
+Segment's main algorithm and working strategy were initially defined and 
introduced in Section 3.2 of @url{https://arxiv.org/abs/1505.01664, Akhlaghi 
and Ichikawa [2015]}.
+Prior to Gnuastro version 0.6 (released 2018), one program (NoiseChisel) was 
in charge of detection @emph{and} segmentation.
+To increase creativity and modularity, NoiseChisel's segmentation features 
were spun off into a separate program (Segment).
+It is strongly recommended to read that paper for a good understanding of what 
Segment does, how it relates to detection, and how each parameter influences 
the output.
+That paper has a large number of figures showing every step on multiple mock 
and real examples.
+
+The paper cannot be updated anymore, but Segment has evolved (and will 
continue to do so): better algorithms or steps have been (and will be) 
found.
+This book is thus the final and definitive guide to Segment.
+The aim of this section is to make the transition from the paper to your 
installed version as smooth as possible, through the list below.
+For a more detailed list of changes in previous Gnuastro releases/versions, 
please follow the @file{NEWS} file@footnote{The @file{NEWS} file is present in 
the released Gnuastro tarball, see @ref{Release tarball}.}.
 
 @itemize
 
 @item
-Since the spin-off from NoiseChisel, the default kernel to smooth the input
-for convolution has a FWHM of 1.5 pixels (still a Gaussian). This is
-slightly less than NoiseChisel's default kernel (which has a FWHM of 2
-pixels). This enables the better detection of sharp clumps: as the kernel
-gets wider, the lower signal-to-noise (but sharp/small) clumps will be
-washed away into the noise. You can use MakeProfiles to build your own
-kernel if this is too sharp/wide for your purpose. For more, see the
-@option{--kernel} option in @ref{Segment input}.
-
-The ability to use a different convolution kernel for detection and
-segmentation is one example of how separating detection from segmentation
-into separate programs can increase creativity. In detection, you want to
-detect the diffuse and extended emission, but in segmentation, you want to
-detect sharp peaks.
+Since the spin-off from NoiseChisel, the default kernel to smooth the input 
for convolution has a FWHM of 1.5 pixels (still a Gaussian).
+This is slightly less than NoiseChisel's default kernel (which has a FWHM of 2 
pixels).
+This enables better detection of sharp clumps: as the kernel gets wider, 
the lower signal-to-noise (but sharp/small) clumps will be washed away into 
the noise.
+You can use MakeProfiles to build your own kernel if this is too sharp/wide 
for your purpose.
+For more, see the @option{--kernel} option in @ref{Segment input}.
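+
+For example, a sketch of building such a kernel with MakeProfiles (a 
Gaussian of FWHM 1.5 pixels, truncated at 5 times the FWHM; adjust the 
numbers to your purpose):
+
+@example
+$ astmkprof --kernel=gaussian,1.5,5 --oversample=1
+@end example
+
+The created file can then be given to this @option{--kernel} option.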
+
+The ability to use a different convolution kernel for detection and 
segmentation is one example of how separating detection from segmentation into 
separate programs can increase creativity.
+In detection, you want to detect the diffuse and extended emission, but in 
segmentation, you want to detect sharp peaks.
 
 @item
-The criteria to select true from false clumps is the peak significance. It
-is defined to be the difference between the clump's peak value
-(@mymath{C_c}) and the highest valued river pixel around that clump
-(@mymath{R_c}). Both are calculated on the convolved image (signified by
-the @mymath{c} subscript). To avoid absolute values (differing from dataset
-to dataset), @mymath{C_c-R_c} is then divided by the Sky standard deviation
-under the river pixel used (@mymath{\sigma_r}) as shown below:
+The criterion for selecting true from false clumps is the peak significance.
+It is defined to be the difference between the clump's peak value 
(@mymath{C_c}) and the highest valued river pixel around that clump 
(@mymath{R_c}).
+Both are calculated on the convolved image (signified by the @mymath{c} 
subscript).
+To avoid absolute values (differing from dataset to dataset), @mymath{C_c-R_c} 
is then divided by the Sky standard deviation under the river pixel used 
(@mymath{\sigma_r}) as shown below:
 
 @dispmath{C_c-R_c\over \sigma_r}
 
-When @option{--minima} is given, the nominator becomes
-@mymath{R_c-C_c}.
-
-The input Sky standard deviation dataset (@option{--std}) is assumed to be
-for the unconvolved image. Therefore a constant factor (related to the
-convolution kernel) is necessary to convert this into an absolute peak
-significance@footnote{To get an estimate of the standard deviation
-correction factor between the input and convolved images, you can take the
-following steps: 1) Mask (set to NaN) all detections on the convolved image
-with the @code{where} operator or @ref{Arithmetic}. 2) Calculate the
-standard deviation of the undetected (non-masked) pixels of the convolved
-image with the @option{--sky} option of @ref{Statistics} (which also
-calculates the Sky standard deviation). Just make sure the tessellation
-settings of Statistics and NoiseChisel are the same (you can check with the
-@option{-P} option). 3) Divide the two standard deviation datasets to get
-the correction factor.}. As far as Segment is concerned, the absolute value
-of this correction factor is irrelevant: because it uses the ambient noise
-(undetected regions) to find the numerical threshold of this fraction and
-applies that over the detected regions.
-
-A distribution's extremum (maximum or minimum) values, used in the new
-criteria, are strongly affected by scatter. On the other hand, the
-convolved image has much less scatter@footnote{For more on the effect of
-convolution on a distribution, see Section 3.1.1 of
-@url{https://arxiv.org/abs/1505.01664, Akhlaghi and Ichikawa
-[2015]}.}. Therefore @mymath{C_c-R_c} is a more reliable (with less
-scatter) measure to identify signal than @mymath{C-R} (on the unconvolved
-image).
-
-Initially, the total clump signal-to-noise ratio of each clump was used,
-see Section 3.2.1 of @url{https://arxiv.org/abs/1505.01664, Akhlaghi and
-Ichikawa [2015]}. Therefore its completeness decreased dramatically when
-clumps were present on gradients. In tests, this measure proved to be more
-successful in detecting clumps on gradients and on flatter regions
-simultaneously.
+When @option{--minima} is given, the numerator becomes @mymath{R_c-C_c}.
+
+The input Sky standard deviation dataset (@option{--std}) is assumed to be for 
the unconvolved image.
+Therefore a constant factor (related to the convolution kernel) is necessary 
to convert this into an absolute peak significance@footnote{To get an estimate 
of the standard deviation correction factor between the input and convolved 
images, you can take the following steps:
+1) Mask (set to NaN) all detections on the convolved image with the 
@code{where} operator of @ref{Arithmetic}.
+2) Calculate the standard deviation of the undetected (non-masked) pixels of 
the convolved image with the @option{--sky} option of @ref{Statistics} (which 
also calculates the Sky standard deviation).
+Just make sure the tessellation settings of Statistics and NoiseChisel are the 
same (you can check with the @option{-P} option).
+3) Divide the two standard deviation datasets to get the correction factor.}.
+As far as Segment is concerned, the absolute value of this correction 
factor is irrelevant: it uses the ambient noise (undetected regions) to 
find the numerical threshold of this fraction and applies that over the 
detected regions.
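+
+As a hypothetical sketch of the footnote's three steps (all file and 
extension names are assumptions, adjust them to your own dataset):
+
+@example
+## 1) Mask (set to NaN) the detected pixels of the convolved image.
+$ astarithmetic conv.fits nc.fits nan where -h1 -hDETECTIONS \
+                --output=conv-masked.fits
+
+## 2) Sky (and its standard deviation) of the masked convolved image.
+$ aststatistics conv-masked.fits --sky --output=conv-sky.fits
+
+## 3) Divide the two standard deviation datasets (extension names
+##    may differ).
+$ astarithmetic conv-sky.fits nc.fits / -hSTD -hSKY_STD \
+                --output=factor.fits
+@end example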
+
+A distribution's extremum (maximum or minimum) values, used in the new 
criterion, are strongly affected by scatter.
+On the other hand, the convolved image has much less scatter@footnote{For more 
on the effect of convolution on a distribution, see Section 3.1.1 of 
@url{https://arxiv.org/abs/1505.01664, Akhlaghi and Ichikawa [2015]}.}.
+Therefore @mymath{C_c-R_c} is a more reliable (with less scatter) measure to 
identify signal than @mymath{C-R} (on the unconvolved image).
+
+Initially, the total signal-to-noise ratio of each clump was used, see 
Section 3.2.1 of @url{https://arxiv.org/abs/1505.01664, Akhlaghi and 
Ichikawa [2015]}.
+Its completeness therefore decreased dramatically when clumps were present 
on gradients.
+In tests, the new (peak significance) measure proved more successful in 
detecting clumps on gradients and on flatter regions simultaneously.
 
 @item
-With the new @option{--minima} option, it is now possible to detect inverse
-clumps (for example absorption features). In such cases, the clump should
-be built from its smallest value.
+With the new @option{--minima} option, it is now possible to detect inverse 
clumps (for example absorption features).
+In such cases, the clump should be built from its smallest value.
 @end itemize
 
 
 @node Invoking astsegment,  , Segment changes after publication, Segment
 @subsection Invoking Segment
 
-Segment will identify substructure within the detected regions of an input
-image. Segment's output labels can be directly used for measurements (for
-example with @ref{MakeCatalog}). The executable name is @file{astsegment}
-with the following general template
+Segment will identify substructure within the detected regions of an input 
image.
+Segment's output labels can be directly used for measurements (for example 
with @ref{MakeCatalog}).
+The executable name is @file{astsegment} with the following general template
 
 @example
 $ astsegment [OPTION ...] InputImage.fits
@@ -18593,30 +13635,18 @@ $ astsegment in.fits --std=0.01 --detection=all 
--clumpsnthresh=10
 
 @cindex Gaussian
 @noindent
-If Segment is to do processing (for example you don't want to get help, or
-see the values of each option), at least one input dataset is necessary
-along with detection and error information, either as separate datasets
-(per-pixel) or fixed values, see @ref{Segment input}. Segment shares a
-large set of common operations with other Gnuastro programs, mainly
-regarding input/output, general processing steps, and general operating
-modes. To help in a unified experience between all of Gnuastro's programs,
-these common operations have the same names and defined in @ref{Common
-options}.
+If Segment is to do processing (for example you don't want to get help, or see 
the values of each option), at least one input dataset is necessary along with 
detection and error information, either as separate datasets (per-pixel) or 
fixed values, see @ref{Segment input}.
+Segment shares a large set of common operations with other Gnuastro programs, 
mainly regarding input/output, general processing steps, and general operating 
modes.
+To help in a unified experience between all of Gnuastro's programs, these 
common operations have the same names and are defined in @ref{Common options}.
 
-As in all Gnuastro programs, options can also be given to Segment in
-configuration files. For a thorough description of Gnuastro's configuration
-file parsing, please see @ref{Configuration files}. All of Segment's
-options with a short description are also always available on the
-command-line with the @option{--help} option, see @ref{Getting help}. To
-inspect the option values without actually running Segment, append your
-command with @option{--printparams} (or @option{-P}).
-
-To help in easy navigation between Segment's options, they are separately
-discussed in the three sub-sections below: @ref{Segment input} discusses
-how you can customize the inputs to Segment. @ref{Segmentation options} is
-devoted to options specific to the high-level segmentation
-process. Finally, in @ref{Segment output}, we'll discuss options that
-affect Segment's output.
+As in all Gnuastro programs, options can also be given to Segment in 
configuration files.
+For a thorough description of Gnuastro's configuration file parsing, please 
see @ref{Configuration files}.
+All of Segment's options with a short description are also always available on 
the command-line with the @option{--help} option, see @ref{Getting help}.
+To inspect the option values without actually running Segment, append your 
command with @option{--printparams} (or @option{-P}).
+
+To help in easy navigation between Segment's options, they are separately 
discussed in the three sub-sections below: @ref{Segment input} discusses how 
you can customize the inputs to Segment.
+@ref{Segmentation options} is devoted to options specific to the high-level 
segmentation process.
+Finally, in @ref{Segment output}, we'll discuss options that affect Segment's 
output.
 
 @menu
 * Segment input::               Input files and options.
@@ -18627,207 +13657,141 @@ affect Segment's output.
 @node Segment input, Segmentation options, Invoking astsegment, Invoking 
astsegment
 @subsubsection Segment input
 
-Besides the input dataset (for example astronomical image), Segment also
-needs to know the Sky standard deviation and the regions of the dataset
-that it should segment. The values dataset is assumed to be Sky subtracted
-by default. If it isn't, you can ask Segment to subtract the Sky internally
-by calling @option{--sky}. For the rest of this discussion, we'll assume it
-is already sky subtracted.
-
-The Sky and its standard deviation can be a single value (to be used for
-the whole dataset) or a separate dataset (for a separate value per
-pixel). If a dataset is used for the Sky and its standard deviation, they
-must either be the size of the input image, or have a single value per tile
-(generated with @option{--oneelempertile}, see @ref{Processing options} and
-@ref{Tessellation}).
-
-The detected regions/pixels can be specified as a detection map (for
-example see @ref{NoiseChisel output}). If @option{--detection=all}, Segment
-won't read any detection map and assume the whole input is a single
-detection. For example when the dataset is fully covered by a large nearby
-galaxy/globular cluster.
-
-When dataset are to be used for any of the inputs, Segment will assume they
-are multiple extensions of a single file by default (when @option{--std} or
-@option{--detection} aren't called). For example NoiseChisel's default
-output @ref{NoiseChisel output}. When the Sky-subtracted values are in one
-file, and the detection and Sky standard deviation are in another, you just
-need to use @option{--detection}: in the absence of @option{--std}, Segment
-will look for both the detection labels and Sky standard deviation in the
-file given to @option{--detection}. Ultimately, if all three are in
-separate files, you need to call both @option{--detection} and
-@option{--std}.
-
-The extensions of the three mandatory inputs can be specified with
-@option{--hdu}, @option{--dhdu}, and @option{--stdhdu}. For a full
-discussion on what to give to these options, see the description of
-@option{--hdu} in @ref{Input output options}. To see their default values
-(along with all the other options), run Segment with the
-@option{--printparams} (or @option{-P}) option. Just recall that in the
-absence of @option{--detection} and @option{--std}, all three are assumed
-to be in the same file. If you only want to see Segment's default values
-for HDUs on your system, run this command:
+Besides the input dataset (for example, an astronomical image), Segment 
also needs to know the Sky standard deviation and the regions of the 
dataset that it should segment.
+The values dataset is assumed to be Sky subtracted by default.
+If it isn't, you can ask Segment to subtract the Sky internally by calling 
@option{--sky}.
+For the rest of this discussion, we'll assume it is already Sky subtracted.
+
+The Sky and its standard deviation can be a single value (to be used for the 
whole dataset) or a separate dataset (for a separate value per pixel).
+If a dataset is used for the Sky and its standard deviation, they must either 
be the size of the input image, or have a single value per tile (generated with 
@option{--oneelempertile}, see @ref{Processing options} and @ref{Tessellation}).
+
+The detected regions/pixels can be specified as a detection map (for 
example, see @ref{NoiseChisel output}).
+If @option{--detection=all}, Segment won't read any detection map and will 
assume the whole input is a single detection: for example, when the dataset 
is fully covered by a large nearby galaxy or globular cluster.
+
+When datasets are to be used for any of the inputs, Segment will assume 
they are multiple extensions of a single file by default (when 
@option{--std} or @option{--detection} aren't called).
+For example, NoiseChisel's default output (see @ref{NoiseChisel output}).
+When the Sky-subtracted values are in one file, and the detection and Sky 
standard deviation are in another, you just need to use @option{--detection}: 
in the absence of @option{--std}, Segment will look for both the detection 
labels and Sky standard deviation in the file given to @option{--detection}.
+Ultimately, if all three are in separate files, you need to call both 
@option{--detection} and @option{--std}.
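+
+For example, a hypothetical call with all three inputs in separate files:
+
+@example
+$ astsegment in.fits --detection=det.fits --std=std.fits
+@end example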
+
+The extensions of the three mandatory inputs can be specified with 
@option{--hdu}, @option{--dhdu}, and @option{--stdhdu}.
+For a full discussion on what to give to these options, see the description of 
@option{--hdu} in @ref{Input output options}.
+To see their default values (along with all the other options), run Segment 
with the @option{--printparams} (or @option{-P}) option.
+Just recall that in the absence of @option{--detection} and @option{--std}, 
all three are assumed to be in the same file.
+If you only want to see Segment's default values for HDUs on your system, run 
this command:
 
 @example
 $ astsegment -P | grep hdu
 @end example
 
-By default Segment will convolve the input with a kernel to improve the
-signal-to-noise ratio of true peaks. If you already have the convolved
-input dataset, you can pass it directly to Segment for faster processing
-(using the @option{--convolved} and @option{--chdu} options). Just don't
-forget that the convolved image must also be Sky-subtracted before calling
-Segment. If a value/file is given to @option{--sky}, the convolved values
-will also be Sky subtracted internally. Alternatively, if you prefer to
-give a kernel (with @option{--kernel} and @option{--khdu}), Segment can do
-the convolution internally. To disable convolution, use
-@option{--kernel=none}.
+By default Segment will convolve the input with a kernel to improve the 
signal-to-noise ratio of true peaks.
+If you already have the convolved input dataset, you can pass it directly to 
Segment for faster processing (using the @option{--convolved} and 
@option{--chdu} options).
+Just don't forget that the convolved image must also be Sky-subtracted before 
calling Segment.
+If a value/file is given to @option{--sky}, the convolved values will also be 
Sky subtracted internally.
+Alternatively, if you prefer to give a kernel (with @option{--kernel} and 
@option{--khdu}), Segment can do the convolution internally.
+To disable convolution, use @option{--kernel=none}.
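+
+For example, to run Segment on NoiseChisel's output without any 
convolution (a minimal sketch):
+
+@example
+$ astsegment nc.fits --kernel=none
+@end example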
 
 @table @option
 
 @item --sky=STR/FLT
-The Sky value(s) to subtract from the input. This option can either be
-given a constant number or a file name containing a dataset (multiple
-values, per pixel or per tile). By default, Segment will assume the input
-dataset is Sky subtracted, so this option is not mandatory.
+The Sky value(s) to subtract from the input.
+This option can either be given a constant number or a file name containing a 
dataset (multiple values, per pixel or per tile).
+By default, Segment will assume the input dataset is Sky subtracted, so this 
option is not mandatory.
 
-If the value can't be read as a number, it is assumed to be a file
-name. When the value is a file, the extension can be specified with
-@option{--skyhdu}. When its not a single number, the given dataset must
-either have the same size as the output or the same size as the
-tessellation (so there is one pixel per tile, see @ref{Tessellation}).
+If the value can't be read as a number, it is assumed to be a file name.
+When the value is a file, the extension can be specified with 
@option{--skyhdu}.
+When it's not a single number, the given dataset must either have the same 
size as the output or the same size as the tessellation (so there is one 
pixel per tile, see @ref{Tessellation}).
 
-When this option is given, its value(s) will be subtracted from the input
-and the (optional) convolved dataset (given to @option{--convolved}) prior
-to starting the segmentation process.
+When this option is given, its value(s) will be subtracted from the input and 
the (optional) convolved dataset (given to @option{--convolved}) prior to 
starting the segmentation process.
 
 @item --skyhdu=STR/INT
-The HDU/extension containing the Sky values. This is mandatory when the
-value given to @option{--sky} is not a number. Please see the description
-of @option{--hdu} in @ref{Input output options} for the different ways you
-can identify a special extension.
+The HDU/extension containing the Sky values.
+This is mandatory when the value given to @option{--sky} is not a number.
+Please see the description of @option{--hdu} in @ref{Input output options} for 
the different ways you can identify a special extension.
 
 @item --std=STR/FLT
-The Sky standard deviation value(s) corresponding to the input. The value
-can either be a constant number or a file name containing a dataset
-(multiple values, per pixel or per tile). The Sky standard deviation is
-mandatory for Segment to operate.
-
-If the value can't be read as a number, it is assumed to be a file
-name. When the value is a file, the extension can be specified with
-@option{--skyhdu}. When its not a single number, the given dataset must
-either have the same size as the output or the same size as the
-tessellation (so there is one pixel per tile, see @ref{Tessellation}).
-
-When this option is not called, Segment will assume the standard deviation
-is a dataset and in a HDU/extension (@option{--stdhdu}) of another one of
-the input file(s). If a file is given to @option{--detection}, it will
-assume that file contains the standard deviation dataset, otherwise, it
-will look into input filename (the main argument, without any option).
+The Sky standard deviation value(s) corresponding to the input.
+The value can either be a constant number or a file name containing a dataset 
(multiple values, per pixel or per tile).
+The Sky standard deviation is mandatory for Segment to operate.
+
+If the value can't be read as a number, it is assumed to be a file name.
+When the value is a file, the extension can be specified with 
@option{--stdhdu}.
+When it's not a single number, the given dataset must either have the same 
size as the output or the same size as the tessellation (so there is one 
pixel per tile, see @ref{Tessellation}).
+
+When this option is not called, Segment will assume the standard deviation 
is a dataset in an HDU/extension (see @option{--stdhdu}) of one of the 
other input file(s).
+If a file is given to @option{--detection}, Segment will assume that file 
contains the standard deviation dataset; otherwise, it will look in the 
main input file (the main argument, without any option).
 
 @item --stdhdu=INT/STR
-The HDU/extension containing the Sky standard deviation values, when the
-value given to @option{--std} is a file name. Please see the description of
-@option{--hdu} in @ref{Input output options} for the different ways you can
-identify a special extension.
+The HDU/extension containing the Sky standard deviation values, when the value 
given to @option{--std} is a file name.
+Please see the description of @option{--hdu} in @ref{Input output options} for 
the different ways you can identify a special extension.
 
 @item --variance
-The input Sky standard deviation value/dataset is actually variance. When
-this option is called, the square root of input Sky standard deviation (see
-@option{--std}) is used internally, not its raw value(s).
+The input Sky standard deviation value/dataset is actually variance.
+When this option is called, the square root of input Sky standard deviation 
(see @option{--std}) is used internally, not its raw value(s).
 
 @item -d STR
 @itemx --detection=STR
-Detection map to use for segmentation. If given a value of @option{all},
-Segment will assume the whole dataset must be segmented, see below. If a
-detection map is given, the extension can be specified with
-@option{--dhdu}. If not given, Segment will assume the desired
-HDU/extension is in the main input argument (input file specified with no
-option).
-
-The final segmentation (clumps or objects) will only be over the non-zero
-pixels of this detection map. The dataset must have the same size as the
-input image. Only datasets with an integer type are acceptable for the
-labeled image, see @ref{Numeric data types}. If your detection map only has
-integer values, but it is stored in a floating point container, you can use
-Gnuastro's Arithmetic program (see @ref{Arithmetic}) to convert it to an
-integer container, like the example below:
+Detection map to use for segmentation.
+If given a value of @option{all}, Segment will assume the whole dataset must 
be segmented, see below.
+If a detection map is given, the extension can be specified with 
@option{--dhdu}.
+If not given, Segment will assume the desired HDU/extension is in the main 
input argument (input file specified with no option).
+
+The final segmentation (clumps or objects) will only be over the non-zero 
pixels of this detection map.
+The dataset must have the same size as the input image.
+Only datasets with an integer type are acceptable for the labeled image, see 
@ref{Numeric data types}.
+If your detection map only has integer values, but it is stored in a floating 
point container, you can use Gnuastro's Arithmetic program (see 
@ref{Arithmetic}) to convert it to an integer container, like the example below:
 
 @example
 $ astarithmetic float.fits int32 --output=int.fits
 @end example
 
-It may happen that the whole input dataset is covered by signal, for
-example when working on parts of the Andromeda galaxy, or nearby globular
-clusters (that cover the whole field of view). In such cases, segmentation
-is necessary over the complete dataset, not just specific regions
-(detections). By default Segment will first use the undetected regions as a
-reference to find the proper signal-to-noise ratio of ``true'' clumps (give
-a purity level specified with @option{--snquant}). Therefore, in such
-scenarios you also need to manually give a ``true'' clump signal-to-noise
-ratio with the @option{--clumpsnthresh} option to disable looking into the
-undetected regions, see @ref{Segmentation options}. In such cases, is
-possible to make a detection map that only has the value @code{1} for all
-pixels (for example using @ref{Arithmetic}), but for convenience, you can
-also use @option{--detection=all}.
+It may happen that the whole input dataset is covered by signal, for example 
when working on parts of the Andromeda galaxy, or nearby globular clusters 
(that cover the whole field of view).
+In such cases, segmentation is necessary over the complete dataset, not just 
specific regions (detections).
+By default Segment will first use the undetected regions as a reference to 
find the proper signal-to-noise ratio of ``true'' clumps (given a purity 
level specified with @option{--snquant}).
+Therefore, in such scenarios you also need to manually give a ``true'' 
clump signal-to-noise ratio with the @option{--clumpsnthresh} option to 
disable looking into the undetected regions, see @ref{Segmentation options}.
+In such cases, it is possible to make a detection map that only has the 
value @code{1} for all pixels (for example using @ref{Arithmetic}), but for 
convenience, you can also use @option{--detection=all}.
 
 @item --dhdu
-The HDU/extension containing the detection map given to
-@option{--detection}. Please see the description of @option{--hdu} in
-@ref{Input output options} for the different ways you can identify a
-special extension.
+The HDU/extension containing the detection map given to @option{--detection}.
+Please see the description of @option{--hdu} in @ref{Input output options} for 
the different ways you can identify a special extension.
 
 @item -k STR
 @itemx --kernel=STR
-The kernel used to convolve the input image. The usage of this option is
-identical to NoiseChisel's @option{--kernel} option (@ref{NoiseChisel
-input}). Please see the descriptions there for more. To disable
-convolution, you can give it a value of @option{none}.
+The kernel used to convolve the input image.
+The usage of this option is identical to NoiseChisel's @option{--kernel} 
option (@ref{NoiseChisel input}).
+Please see the descriptions there for more.
+To disable convolution, you can give it a value of @option{none}.
 
 @item --khdu
-The HDU/extension containing the kernel used for convolution. For
-acceptable values, please see the description of @option{--hdu} in
-@ref{Input output options}.
+The HDU/extension containing the kernel used for convolution.
+For acceptable values, please see the description of @option{--hdu} in 
@ref{Input output options}.
 
 @item --convolved
-The convolved image to avoid internal convolution by Segment. The usage of
-this option is identical to NoiseChisel's @option{--convolved}
-option. Please see @ref{NoiseChisel input} for a thorough discussion of the
-usefulness and best practices of using this option.
-
-If you want to use the same convolution kernel for detection (with
-@ref{NoiseChisel}) and segmentation, with this option, you can use the same
-convolved image (that is also available in NoiseChisel) and avoid two
-convolutions. However, just be careful to use the input to NoiseChisel as
-the input to Segment also, then use the @option{--sky} and @option{--std}
-to specify the Sky and its standard deviation (from NoiseChisel's
-output). Recall that when NoiseChisel is not called with
-@option{--rawoutput}, the first extension of NoiseChisel's output is the
-@emph{Sky-subtracted} input (see @ref{NoiseChisel output}). So if you use
-the same convolved image that you fed to NoiseChisel, but use NoiseChisel's
-output with Segment's @option{--convolved}, then the convolved image won't
-be Sky subtracted.
+The convolved image to avoid internal convolution by Segment.
+The usage of this option is identical to NoiseChisel's @option{--convolved} 
option.
+Please see @ref{NoiseChisel input} for a thorough discussion of the usefulness 
and best practices of using this option.
+
+If you want to use the same convolution kernel for detection (with 
@ref{NoiseChisel}) and segmentation, with this option, you can use the same 
convolved image (that is also available in NoiseChisel) and avoid two 
convolutions.
+However, just be careful to also give NoiseChisel's input to Segment, then 
use @option{--sky} and @option{--std} to specify the Sky and its standard 
deviation (from NoiseChisel's output).
+Recall that when NoiseChisel is not called with @option{--rawoutput}, the 
first extension of NoiseChisel's output is the @emph{Sky-subtracted} input (see 
@ref{NoiseChisel output}).
+So if you use the same convolved image that you fed to NoiseChisel, but use 
NoiseChisel's output with Segment's @option{--convolved}, then the convolved 
image won't be Sky subtracted.
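+
+As a hypothetical sketch of the intended pattern (one convolved image 
re-used by both programs, with the Sky and its standard deviation taken 
from NoiseChisel's output):
+
+@example
+$ astnoisechisel in.fits --convolved=conv.fits --output=nc.fits
+$ astsegment in.fits --convolved=conv.fits --detection=nc.fits \
+             --sky=nc.fits --skyhdu=SKY --std=nc.fits --stdhdu=SKY_STD
+@end example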
 
 @item --chdu
-The HDU/extension containing the convolved image (given to
-@option{--convolved}). For acceptable values, please see the description of
-@option{--hdu} in @ref{Input output options}.
+The HDU/extension containing the convolved image (given to 
@option{--convolved}).
+For acceptable values, please see the description of @option{--hdu} in 
@ref{Input output options}.
 
 @item -L INT[,INT]
 @itemx --largetilesize=INT[,INT]
-The size of the large tiles to use for identifying the clump S/N threshold
-over the undetected regions. The usage of this option is identical to
-NoiseChisel's @option{--largetilesize} option (@ref{NoiseChisel
-input}). Please see the descriptions there for more.
-
-The undetected regions can be a significant fraction of the dataset and
-finding clumps requires sorting of the desired regions, which can be
-slow. To speed up the processing, Segment finds clumps in the undetected
-regions over separate large tiles. This allows it to have to sort a much
-smaller set of pixels and also to treat them independently and in
-parallel. Both these issues greatly speed it up. Just be sure to not
-decrease the large tile sizes too much (less than 100 pixels in each
-dimension). It is important for them to be much larger than the clumps.
+The size of the large tiles to use for identifying the clump S/N threshold 
over the undetected regions.
+The usage of this option is identical to NoiseChisel's 
@option{--largetilesize} option (@ref{NoiseChisel input}).
+Please see the descriptions there for more.
+
+The undetected regions can be a significant fraction of the dataset and 
finding clumps requires sorting of the desired regions, which can be slow.
+To speed up the processing, Segment finds clumps in the undetected regions 
over separate large tiles.
+This allows it to sort a much smaller set of pixels and also to treat the 
tiles independently and in parallel.
+Both of these greatly speed it up.
+Just be sure not to decrease the large tile sizes too much (less than 100 
pixels in each dimension).
+It is important for them to be much larger than the clumps.
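+
+For example (a hypothetical size; keep each dimension well above 100 
pixels):
+
+@example
+$ astsegment nc.fits --largetilesize=250,250
+@end example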
 
 @end table
 
@@ -18835,182 +13799,128 @@ dimension). It is important for them to be much 
larger than the clumps.
 @node Segmentation options, Segment output, Segment input, Invoking astsegment
 @subsubsection Segmentation options
 
-The options below can be used to configure every step of the segmentation
-process in the Segment program. For a more complete explanation (with
-figures to demonstrate each step), please see Section 3.2 of
-@url{https://arxiv.org/abs/1505.01664, Akhlaghi and Ichikawa [2015]}, and
-also @ref{Segment}. By default, Segment will follow the procedure described
-in the paper to find the S/N threshold based on the noise properties. This
-can be disabled by directly giving a trustable signal-to-noise ratio to the
-@option{--clumpsnthresh} option.
+The options below can be used to configure every step of the segmentation 
process in the Segment program.
+For a more complete explanation (with figures to demonstrate each step), 
please see Section 3.2 of @url{https://arxiv.org/abs/1505.01664, Akhlaghi and 
Ichikawa [2015]}, and also @ref{Segment}.
+By default, Segment will follow the procedure described in the paper to find 
the S/N threshold based on the noise properties.
+This can be disabled by directly giving a trustable signal-to-noise ratio to 
the @option{--clumpsnthresh} option.
 
-Recall that you can always see the full list of Gnuastro's options with the
-@option{--help} (see @ref{Getting help}), or @option{--printparams} (or
-@option{-P}) to see their values (see @ref{Operating mode options}).
+Recall that you can always see the full list of Gnuastro's options with the 
@option{--help} (see @ref{Getting help}), or @option{--printparams} (or 
@option{-P}) to see their values (see @ref{Operating mode options}).
 
 @table @option
 
 @item -B FLT
 @itemx --minskyfrac=FLT
-Minimum fraction (value between 0 and 1) of Sky (undetected) areas in a
-large tile. Only (large) tiles with a fraction of undetected pixels (Sky)
-greater than this value will be used for finding clumps. The clumps found
-in the undetected areas will be used to estimate a S/N threshold for true
-clumps. Therefore this is an important option (to decrease) in crowded
-fields. Operationally, this is almost identical to NoiseChisel's
-@option{--minskyfrac} option (@ref{Detection options}). Please see the
-descriptions there for more.
+Minimum fraction (value between 0 and 1) of Sky (undetected) areas in a large 
tile.
+Only (large) tiles with a fraction of undetected pixels (Sky) greater than 
this value will be used for finding clumps.
+The clumps found in the undetected areas will be used to estimate a S/N 
threshold for true clumps.
+Therefore, this is an important option to decrease in crowded fields.
+Operationally, this is almost identical to NoiseChisel's @option{--minskyfrac} 
option (@ref{Detection options}).
+Please see the descriptions there for more.
 
 @item --minima
-Build the clumps based on the local minima, not maxima. By default, clumps
-are built starting from local maxima (see Figure 8 of
-@url{https://arxiv.org/abs/1505.01664, Akhlaghi and Ichikawa
-[2015]}). Therefore, this option can be useful when you are searching for
-true local minima (for example absorption features).
+Build the clumps based on the local minima, not maxima.
+By default, clumps are built starting from local maxima (see Figure 8 of 
@url{https://arxiv.org/abs/1505.01664, Akhlaghi and Ichikawa [2015]}).
+Therefore, this option can be useful when you are searching for true local 
minima (for example absorption features).
 
 @item -m INT
 @itemx --snminarea=INT
-The minimum area which a clump in the undetected regions should have in
-order to be considered in the clump Signal to noise ratio measurement. If
-this size is set to a small value, the Signal to noise ratio of false
-clumps will not be accurately found. It is recommended that this value be
-larger than the value to NoiseChisel's @option{--snminarea}. Because the
-clumps are found on the convolved (smoothed) image while the
-pseudo-detections are found on the input image. You can use
-@option{--checksn} and @option{--checksegmentation} to see if your chosen
-value is reasonable or not.
+The minimum area which a clump in the undetected regions should have in 
order to be considered in the clump signal-to-noise ratio measurement.
+If this size is set to a small value, the signal-to-noise ratio of false 
clumps will not be accurately found.
+It is recommended that this value be larger than the value given to 
NoiseChisel's @option{--snminarea}, because the clumps are found on the 
convolved (smoothed) image while the pseudo-detections are found on the 
input image.
+You can use @option{--checksn} and @option{--checksegmentation} to see if your 
chosen value is reasonable or not.
 
 @item --checksn
-Save the S/N values of the clumps over the sky and detected regions into
-separate tables. If @option{--tableformat} is a FITS format, each table will
-be written into a separate extension of one file suffixed with
-@file{_clumpsn.fits}. If it is plain text, a separate file will be made for
-each table (ending in @file{_clumpsn_sky.txt} and
-@file{_clumpsn_det.txt}). For more on @option{--tableformat} see @ref{Input
-output options}.
-
-You can use these tables to inspect the S/N values and their distribution
-(in combination with the @option{--checksegmentation} option to see where
-the clumps are). You can use Gnuastro's @ref{Statistics} to make a
-histogram of the distribution (ready for plotting in a text file, or a
-crude ASCII-art demonstration on the command-line).
-
-With this option, Segment will abort as soon as the two tables are
-created. This allows you to inspect the steps leading to the final S/N
-quantile threshold, this behavior can be disabled with
-@option{--continueaftercheck}.
+Save the S/N values of the clumps over the sky and detected regions into 
separate tables.
+If @option{--tableformat} is a FITS format, each table will be written into a 
separate extension of one file suffixed with @file{_clumpsn.fits}.
+If it is plain text, a separate file will be made for each table (ending in 
@file{_clumpsn_sky.txt} and @file{_clumpsn_det.txt}).
+For more on @option{--tableformat} see @ref{Input output options}.
+
+You can use these tables to inspect the S/N values and their distribution (in 
combination with the @option{--checksegmentation} option to see where the 
clumps are).
+You can use Gnuastro's @ref{Statistics} to make a histogram of the 
distribution (ready for plotting in a text file, or a crude ASCII-art 
demonstration on the command-line).
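+
+For example, a sketch with hypothetical file and column names (adjust them 
to your own output):
+
+@example
+$ aststatistics seg_clumpsn.fits -h1 -cSN --asciihist
+@end example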
+
+With this option, Segment will abort as soon as the two tables are created.
+This allows you to inspect the steps leading to the final S/N quantile 
threshold; this behavior can be disabled with @option{--continueaftercheck}.
 
 @item --minnumfalse=INT
-The minimum number of clumps over undetected (Sky) regions to identify the
-requested Signal-to-Noise ratio threshold. Operationally, this is almost
-identical to NoiseChisel's @option{--minnumfalse} option (@ref{Detection
-options}). Please see the descriptions there for more.
+The minimum number of clumps over undetected (Sky) regions to identify the 
requested Signal-to-Noise ratio threshold.
+Operationally, this is almost identical to NoiseChisel's 
@option{--minnumfalse} option (@ref{Detection options}).
+Please see the descriptions there for more.
 
 @item -c FLT
 @itemx --snquant=FLT
-The quantile of the signal-to-noise ratio distribution of clumps in
-undetected regions, used to define true clumps. After identifying all the
-usable clumps in the undetected regions of the dataset, the given quantile
-of their signal-to-noise ratios is used to define the signal-to-noise ratio
-of a ``true'' clump. Effectively, this can be seen as an inverse p-value
-measure. See Figure 9 and Section 3.2.1 of
-@url{https://arxiv.org/abs/1505.01664, Akhlaghi and Ichikawa [2015]} for a
-complete explanation. The full distribution of clump signal-to-noise ratios
-over the undetected areas can be saved into a table with @option{--checksn}
-option and visually inspected with @option{--checksegmentation}.
+The quantile of the signal-to-noise ratio distribution of clumps in undetected 
regions, used to define true clumps.
+After identifying all the usable clumps in the undetected regions of the 
dataset, the given quantile of their signal-to-noise ratios is used to define 
the signal-to-noise ratio of a ``true'' clump.
+Effectively, this can be seen as an inverse p-value measure.
+See Figure 9 and Section 3.2.1 of @url{https://arxiv.org/abs/1505.01664, 
Akhlaghi and Ichikawa [2015]} for a complete explanation.
+The full distribution of clump signal-to-noise ratios over the undetected areas can be saved into a table with the @option{--checksn} option and visually inspected with @option{--checksegmentation}.
 
 @item -v
 @itemx --keepmaxnearriver
-Keep a clump whose maximum (minimum if @option{--minima} is called) flux is
-8-connected to a river pixel. By default such clumps over detections are
-considered to be noise and are removed irrespective of their brightness
-(see @ref{Flux Brightness and magnitude}). Over large profiles, that sink
-into the noise very slowly, noise can cause part of the profile (which was
-flat without noise) to become a very large and with a very high Signal to
-noise ratio. In such cases, the pixel with the maximum flux in the clump
-will be immediately touching a river pixel.
+Keep a clump whose maximum (minimum if @option{--minima} is called) flux is 
8-connected to a river pixel.
+By default such clumps over detections are considered to be noise and are 
removed irrespective of their brightness (see @ref{Flux Brightness and 
magnitude}).
+Over large profiles that sink into the noise very slowly, noise can cause part of the profile (which would be flat without noise) to become a very large clump with a very high signal-to-noise ratio.
+In such cases, the pixel with the maximum flux in the clump will be 
immediately touching a river pixel.
 
 @item -s FLT
 @itemx --clumpsnthresh=FLT
-The signal-to-noise threshold for true clumps. If this option is given,
-then the segmentation options above will be ignored and the given value
-will be directly used to identify true clumps over the detections. This can
-be useful if you have a large dataset with similar noise properties. You
-can find a robust signal-to-noise ratio based on a (sufficiently large)
-smaller portion of the dataset. Afterwards, with this option, you can speed
-up the processing on the whole dataset. Other scenarios where this option
-may be useful is when, the image might not contain enough/any Sky regions.
+The signal-to-noise threshold for true clumps.
+If this option is given, then the segmentation options above will be ignored 
and the given value will be directly used to identify true clumps over the 
detections.
+This can be useful if you have a large dataset with similar noise properties.
+You can find a robust signal-to-noise ratio based on a (sufficiently large) 
smaller portion of the dataset.
+Afterwards, with this option, you can speed up the processing on the whole 
dataset.
+Another scenario where this option may be useful is when the image does not contain enough (or any) Sky regions.
 
 @item -G FLT
 @itemx --gthresh=FLT
-Threshold (multiple of the sky standard deviation added with the sky) to
-stop growing true clumps. Once true clumps are found, they are set as the
-basis to segment the detected region. They are grown until the threshold
-specified by this option.
+Threshold (multiple of the sky standard deviation added with the sky) to stop 
growing true clumps.
+Once true clumps are found, they are set as the basis to segment the detected 
region.
+They are grown until the threshold specified by this option.
 
 @item -y INT
 @itemx --minriverlength=INT
-The minimum length of a river between two grown clumps for it to be
-considered in signal-to-noise ratio estimations. Similar to
-@option{--snminarea}, if the length of the river is too short, the
-signal-to-noise ratio can be noisy and unreliable. Any existing rivers
-shorter than this length will be considered as non-existent, independent of
-their Signal to noise ratio. The clumps are grown on the input image,
-therefore this value can be smaller than the value given to
-@option{--snminarea}. Recall that the clumps were defined on the convolved
-image so @option{--snminarea} should be larger.
+The minimum length of a river between two grown clumps for it to be considered 
in signal-to-noise ratio estimations.
+Similar to @option{--snminarea}, if the length of the river is too short, the 
signal-to-noise ratio can be noisy and unreliable.
+Any existing rivers shorter than this length will be considered as 
non-existent, independent of their Signal to noise ratio.
+The clumps are grown on the input image, therefore this value can be smaller 
than the value given to @option{--snminarea}.
+Recall that the clumps were defined on the convolved image so 
@option{--snminarea} should be larger.
 
 @item -O FLT
 @itemx --objbordersn=FLT
-The maximum Signal to noise ratio of the rivers between two grown clumps in
-order to consider them as separate `objects'. If the Signal to noise ratio
-of the river between two grown clumps is larger than this value, they are
-defined to be part of one `object'. Note that the physical reality of these
-`objects' can never be established with one image, or even multiple images
-from one broad-band filter. Any method we devise to define `object's over a
-detected region is ultimately subjective.
-
-Two very distant galaxies or satellites in one halo might lie in the
-same line of sight and be detected as clumps on one detection. On the
-other hand, the connection (through a spiral arm or tidal tail for
-example) between two parts of one galaxy might have such a low surface
-brightness that they are broken up into multiple detections or
-objects. In fact if you have noticed, exactly for this purpose, this is
-the only Signal to noise ratio that the user gives into
-NoiseChisel. The `true' detections and clumps can be objectively
-identified from the noise characteristics of the image, so you don't
-have to give any hand input Signal to noise ratio.
+The maximum Signal to noise ratio of the rivers between two grown clumps in 
order to consider them as separate `objects'.
+If the Signal to noise ratio of the river between two grown clumps is larger 
than this value, they are defined to be part of one `object'.
+Note that the physical reality of these `objects' can never be established 
with one image, or even multiple images from one broad-band filter.
+Any method we devise to define `objects' over a detected region is ultimately subjective.
+
+Two very distant galaxies or satellites in one halo might lie in the same line 
of sight and be detected as clumps on one detection.
+On the other hand, the connection (through a spiral arm or tidal tail for 
example) between two parts of one galaxy might have such a low surface 
brightness that they are broken up into multiple detections or objects.
+In fact, as you may have noticed, this is the only signal-to-noise ratio that you have to specify by hand, exactly for this purpose.
+The `true' detections and clumps can be objectively identified from the noise characteristics of the image, so you don't have to give any signal-to-noise ratio by hand for them.
 
 @item --checksegmentation
-A file with the suffix @file{_seg.fits} will be created. This file keeps
-all the relevant steps in finding true clumps and segmenting the detections
-into multiple objects in various extensions. Having read the paper or the
-steps above. Examining this file can be an excellent guide in choosing the
-best set of parameters. Note that calling this function will significantly
-slow NoiseChisel. In verbose mode (without the @option{--quiet} option, see
-@ref{Operating mode options}) the important steps (along with their
-extension names) will also be reported.
-
-With this option, NoiseChisel will abort as soon as the two tables are
-created. This behavior can be disabled with @option{--continueaftercheck}.
+A file with the suffix @file{_seg.fits} will be created.
+This file keeps all the relevant steps in finding true clumps and segmenting 
the detections into multiple objects in various extensions.
+After reading the paper or the steps above, examining this file can be an excellent guide in choosing the best set of parameters.
+Note that using this option will significantly slow Segment.
+In verbose mode (without the @option{--quiet} option, see @ref{Operating mode 
options}) the important steps (along with their extension names) will also be 
reported.
+
+With this option, Segment will abort as soon as the check image is created.
+This behavior can be disabled with @option{--continueaftercheck}.
 
 @end table
 
 @node Segment output,  , Segmentation options, Invoking astsegment
 @subsubsection Segment output
 
-The main output of Segment are two label datasets (with integer types,
-separating the dataset's elements into different classes). They have
-HDU/extension names of @code{CLUMPS} and @code{OBJECTS}.
+The main output of Segment consists of two label datasets (with integer types, separating the dataset's elements into different classes).
+They have HDU/extension names of @code{CLUMPS} and @code{OBJECTS}.
 
-Similar to all Gnuastro's FITS outputs, the zero-th extension/HDU of the
-main output file only contains header keywords and image or table. It
-contains the Segment input files and parameters (option names and values)
-as FITS keywords. Note that if an option name is longer than 8 characters,
-the keyword name is the second word. The first word is
-@code{HIERARCH}. Also note that according to the FITS standard, the keyword
-names must be in capital letters, therefore, if you want to use Grep to
-inspect these keywords, use the @option{-i} option, like the example below.
+Similar to all of Gnuastro's FITS outputs, the zero-th extension/HDU of the main output file only contains header keywords and no image or table.
+It contains the Segment input files and parameters (option names and values) 
as FITS keywords.
+Note that if an option name is longer than 8 characters, the keyword name is 
the second word.
+The first word is @code{HIERARCH}.
+Also note that according to the FITS standard, the keyword names must be in 
capital letters, therefore, if you want to use Grep to inspect these keywords, 
use the @option{-i} option, like the example below.
 
 @example
 $ astfits image_segmented.fits -h0 | grep -i snquant
@@ -19018,99 +13928,65 @@ $ astfits image_segmented.fits -h0 | grep -i snquant
 
 @cindex DS9
 @cindex SAO DS9
-By default, besides the @code{CLUMPS} and @code{OBJECTS} extensions,
-Segment's output will also contain the (technically redundant) input
-dataset and the sky standard deviation dataset (if it wasn't a constant
-number). This can help in visually inspecting the result when viewing the
-images as a ``Multi-extension data cube'' in SAO DS9 for example (see
-@ref{Viewing multiextension FITS images}). You can simply flip through the
-extensions and see the same region of the image and its corresponding
-clumps/object labels. It also makes it easy to feed the output (as one
-file) into MakeCatalog when you intend to make a catalog afterwards (see
-@ref{MakeCatalog}. To remove these redundant extensions from the output
-(for example when designing a pipeline), you can use
-@option{--rawoutput}.
-
-The @code{OBJECTS} and @code{CLUMPS} extensions can be used as input into
-@ref{MakeCatalog} to generate a catalog for higher-level analysis. If you
-want to treat each clump separately, you can give a very large value (or
-even a NaN, which will always fail) to the @option{--gthresh} option (for
-example @code{--gthresh=1e10} or @code{--gthresh=nan}), see
-@ref{Segmentation options}.
-
-For a complete definition of clumps and objects, please see Section 3.2 of
-@url{https://arxiv.org/abs/1505.01664, Akhlaghi and Ichikawa [2015]} and
-@ref{Segmentation options}. The clumps are ``true'' local maxima (minima if
-@option{--minima} is called) and their surrounding pixels until a local
-minimum/maximum (caused by noise fluctuations, or another ``true''
-clump). Therefore it may happen that some of the input detections aren't
-covered by clumps at all (very diffuse objects without any strong peak),
-while some objects may contain many clumps. Even in those that have clumps,
-there will be regions that are too diffuse. The diffuse regions (within the
-input detected regions) are given a negative label (-1) to help you
-separate them from the undetected regions (with a value of zero).
-
-Each clump is labeled with respect to its host object. Therefore, if an
-object has three clumps for example, the clumps within it have labels 1, 2
-and 3. As a result, if an initial detected region has multiple objects,
-each with a single clump, all the clumps will have a label of 1. The total
-number of clumps in the dataset is stored in the @code{NCLUMPS} keyword of
-the @code{CLUMPS} extension and printed in the verbose output of Segment
-(when @option{--quiet} is not called).
-
-The @code{OBJECTS} extension of the output will give a positive
-counter/label to every detected pixel in the input. As described in
-Akhlaghi and Ichikawa [2015], the true clumps are grown until a certain
-threshold. If the grown clumps touch other clumps and the connection is
-strong enough, they are considered part of the same @emph{object}. Once
-objects (grown clumps) are identified, they are grown to cover the whole
-detected area.
+By default, besides the @code{CLUMPS} and @code{OBJECTS} extensions, Segment's 
output will also contain the (technically redundant) input dataset and the sky 
standard deviation dataset (if it wasn't a constant number).
+This can help in visually inspecting the result when viewing the images as a 
``Multi-extension data cube'' in SAO DS9 for example (see @ref{Viewing 
multiextension FITS images}).
+You can simply flip through the extensions and see the same region of the 
image and its corresponding clumps/object labels.
+It also makes it easy to feed the output (as one file) into MakeCatalog when you intend to make a catalog afterwards (see @ref{MakeCatalog}).
+To remove these redundant extensions from the output (for example when 
designing a pipeline), you can use @option{--rawoutput}.
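+
+For example, assuming SAO DS9 is installed, you can flip through the extensions of a hypothetical output with a command like this:
+
+@example
+$ ds9 -mecube image_segmented.fits -zscale
+@end example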
+
+The @code{OBJECTS} and @code{CLUMPS} extensions can be used as input into 
@ref{MakeCatalog} to generate a catalog for higher-level analysis.
+If you want to treat each clump separately, you can give a very large value 
(or even a NaN, which will always fail) to the @option{--gthresh} option (for 
example @code{--gthresh=1e10} or @code{--gthresh=nan}), see @ref{Segmentation 
options}.
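+
+For example, such a run (on a hypothetical input) could look like this:
+
+@example
+$ astsegment image_detected.fits --gthresh=nan
+@end example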
+
+For a complete definition of clumps and objects, please see Section 3.2 of 
@url{https://arxiv.org/abs/1505.01664, Akhlaghi and Ichikawa [2015]} and 
@ref{Segmentation options}.
+The clumps are ``true'' local maxima (minima if @option{--minima} is called) 
and their surrounding pixels until a local minimum/maximum (caused by noise 
fluctuations, or another ``true'' clump).
+Therefore it may happen that some of the input detections aren't covered by 
clumps at all (very diffuse objects without any strong peak), while some 
objects may contain many clumps.
+Even in those that have clumps, there will be regions that are too diffuse.
+The diffuse regions (within the input detected regions) are given a negative 
label (-1) to help you separate them from the undetected regions (with a value 
of zero).
+
+Each clump is labeled with respect to its host object.
+Therefore, if an object has three clumps for example, the clumps within it 
have labels 1, 2 and 3.
+As a result, if an initial detected region has multiple objects, each with a 
single clump, all the clumps will have a label of 1.
+The total number of clumps in the dataset is stored in the @code{NCLUMPS} 
keyword of the @code{CLUMPS} extension and printed in the verbose output of 
Segment (when @option{--quiet} is not called).
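+
+For example, similar to the Grep command shown before, you can inspect the total number of clumps in a hypothetical output with:
+
+@example
+$ astfits image_segmented.fits -hCLUMPS | grep -i nclumps
+@end example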
+
+The @code{OBJECTS} extension of the output will give a positive counter/label 
to every detected pixel in the input.
+As described in Akhlaghi and Ichikawa [2015], the true clumps are grown until 
a certain threshold.
+If the grown clumps touch other clumps and the connection is strong enough, 
they are considered part of the same @emph{object}.
+Once objects (grown clumps) are identified, they are grown to cover the whole 
detected area.
 
 The options to configure the output of Segment are listed below:
 
 @table @option
 @item --continueaftercheck
-Don't abort Segment after producing the check image(s). The usage of this
-option is identical to NoiseChisel's @option{--continueaftercheck} option
-(@ref{NoiseChisel input}). Please see the descriptions there for more.
+Don't abort Segment after producing the check image(s).
+The usage of this option is identical to NoiseChisel's 
@option{--continueaftercheck} option (@ref{NoiseChisel input}).
+Please see the descriptions there for more.
 
 @item --onlyclumps
-Abort Segment after finding true clumps and don't continue with finding
-options. Therefore, no @code{OBJECTS} extension will be present in the
-output. Each true clump in @code{CLUMPS} will get a unique label, but
-diffuse regions will still have a negative value.
+Abort Segment after finding true clumps and don't continue with finding objects.
+Therefore, no @code{OBJECTS} extension will be present in the output.
+Each true clump in @code{CLUMPS} will get a unique label, but diffuse regions 
will still have a negative value.
 
-To make a catalog of the clumps, the input detection map (where all the
-labels are one) can be fed into @ref{MakeCatalog} along with the input
-detection map to Segment (that only had a value of @code{1} for all
-detected pixels) with @option{--clumpscat}. In this way, MakeCatalog will
-assume all the clumps belong to a single ``object''.
+To make a catalog of the clumps, this output can be fed into @ref{MakeCatalog} with @option{--clumpscat}, along with the detection map that was given to Segment (which only had a value of @code{1} for all detected pixels).
+In this way, MakeCatalog will assume all the clumps belong to a single 
``object''.
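+
+For example, the first step of such a pipeline (on a hypothetical NoiseChisel output) could simply be:
+
+@example
+$ astsegment image_detected.fits --onlyclumps
+@end example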
 
 @item --grownclumps
-In the output @code{CLUMPS} extension, store the grown clumps. If a
-detected region contains no clumps or only one clump, then it will be fully
-given a label of @code{1} (no negative valued pixels).
+In the output @code{CLUMPS} extension, store the grown clumps.
+If a detected region contains no clumps or only one clump, then it will be 
fully given a label of @code{1} (no negative valued pixels).
 
 @item --rawoutput
-Only write the @code{CLUMPS} and @code{OBJECTS} datasets in the output
-file. Without this option (by default), the first and last extensions of
-the output will the Sky-subtracted input dataset and the Sky standard
-deviation dataset (if it wasn't a number). When the datasets are small,
-these redundant extensions can make it convenient to inspect the results
-visually or feed the output to @ref{MakeCatalog} for
-measurements. Ultimately both the input and Sky standard deviation datasets
-are redundant (you had them before running Segment). When the inputs are
-large/numerous, these extra dataset can be a burden.
+Only write the @code{CLUMPS} and @code{OBJECTS} datasets in the output file.
+Without this option (by default), the first and last extensions of the output will be the Sky-subtracted input dataset and the Sky standard deviation dataset (if it wasn't a constant number).
+When the datasets are small, these redundant extensions can make it convenient 
to inspect the results visually or feed the output to @ref{MakeCatalog} for 
measurements.
+Ultimately both the input and Sky standard deviation datasets are redundant 
(you had them before running Segment).
+When the inputs are large/numerous, these extra datasets can be a burden.
 @end table
 
 @cartouche
 @noindent
 @cindex Compression
-@strong{Save space:} with the @option{--rawoutput}, Segment's output will
-only be two labeled datasets (only containing integers). Since they have no
-noise, such datasets can be compressed very effectively (without any loss
-of data) with exceptionally high compression ratios. You can use the
-following command to compress it with the best ratio:
+@strong{Save space:} with @option{--rawoutput}, Segment's output will only be two labeled datasets (only containing integers).
+Since they have no noise, such datasets can be compressed very effectively 
(without any loss of data) with exceptionally high compression ratios.
+You can use the following command to compress it with the best ratio:
 
 @cindex GNU Gzip
 @example
@@ -19118,10 +13994,7 @@ $ gzip --best segment_output.fits
 @end example
 
 @noindent
-The resulting @file{.fits.gz} file can then be fed into any of Gnuastro's
-programs directly, without having to decompress it separately (it will just
-take them a little longer, because they have to decompress it internally
-before use).
+The resulting @file{.fits.gz} file can then be fed into any of Gnuastro's 
programs directly, without having to decompress it separately (it will just 
take them a little longer, because they have to decompress it internally before 
use).
 @end cartouche
 
 When the input is a 2D image, to inspect NoiseChisel's output you can
@@ -19144,41 +14017,22 @@ images}.
 @node MakeCatalog, Match, Segment, Data analysis
 @section MakeCatalog
 
-At the lowest level, a dataset (for example an image) is just a collection
-of values, placed after each other in any number of dimensions (for example
-an image is a 2D dataset). Each data-element (pixel) just has two
-properties: its position (relative to the rest) and its value. In
-higher-level analysis, an entire dataset (an image for example) is rarely
-treated as a singular entity@footnote{You can derive the over-all
-properties of a complete dataset (1D table column, 2D image, or 3D
-data-cube) treated as a single entity with Gnuastro's Statistics program
-(see @ref{Statistics}).}. You usually want to know/measure the properties
-of the (separate) scientifically interesting targets that are embedded in
-it. For example the magnitudes, positions and elliptical properties of the
-galaxies that are in the image.
-
-MakeCatalog is Gnuastro's program for localized measurements over a
-dataset. In other words, MakeCatalog is Gnuastro's program to convert
-low-level datasets (like images), to high level catalogs. The role of
-MakeCatalog in a scientific analysis and the benefits of its model (where
-detection/segmentation is separated from measurement) is discussed in
-@url{https://arxiv.org/abs/1611.06387v1, Akhlaghi [2016]}@footnote{A
-published paper cannot undergo any more change, so this manual is the
-definitive guide.} and summarized in @ref{Detection and catalog
-production}. We strongly recommend reading this short paper for a better
-understanding of this methodology. Understanding the effective usage of
-MakeCatalog, will thus also help effective use of other (lower-level)
-Gnuastro's programs like @ref{NoiseChisel} or @ref{Segment}.
-
-It is important to define your regions of interest for measurements
-@emph{before} running MakeCatalog. MakeCatalog is specialized in doing
-measurements accurately and efficiently. Therefore MakeCatalog will not do
-detection, segmentation, or defining apertures on requested positions in
-your dataset. Following Gnuastro's modularity principle, there are separate
-and highly specialized and customizable programs in Gnuastro for these
-other jobs as shown below (for a usage example in a real-world analysis,
-see @ref{General program usage tutorial} and @ref{Detecting large extended
-targets}).
+At the lowest level, a dataset (for example an image) is just a collection of 
values, placed after each other in any number of dimensions (for example an 
image is a 2D dataset).
+Each data-element (pixel) just has two properties: its position (relative to 
the rest) and its value.
+In higher-level analysis, an entire dataset (an image for example) is rarely 
treated as a singular entity@footnote{You can derive the over-all properties of 
a complete dataset (1D table column, 2D image, or 3D data-cube) treated as a 
single entity with Gnuastro's Statistics program (see @ref{Statistics}).}.
+You usually want to know/measure the properties of the (separate) 
scientifically interesting targets that are embedded in it.
+For example, the magnitudes, positions and elliptical properties of the galaxies that are in the image.
+
+MakeCatalog is Gnuastro's program for localized measurements over a dataset.
+In other words, MakeCatalog is Gnuastro's program to convert low-level datasets (like images) to high-level catalogs.
+The role of MakeCatalog in a scientific analysis and the benefits of its model 
(where detection/segmentation is separated from measurement) is discussed in 
@url{https://arxiv.org/abs/1611.06387v1, Akhlaghi [2016]}@footnote{A published 
paper cannot undergo any more change, so this manual is the definitive guide.} 
and summarized in @ref{Detection and catalog production}.
+We strongly recommend reading this short paper for a better understanding of 
this methodology.
+Understanding the effective usage of MakeCatalog will thus also help the effective use of other (lower-level) Gnuastro programs like @ref{NoiseChisel} or @ref{Segment}.
+
+It is important to define your regions of interest for measurements 
@emph{before} running MakeCatalog.
+MakeCatalog is specialized in doing measurements accurately and efficiently.
+Therefore MakeCatalog will not do detection, segmentation, or defining 
apertures on requested positions in your dataset.
+Following Gnuastro's modularity principle, there are separate and highly 
specialized and customizable programs in Gnuastro for these other jobs as shown 
below (for a usage example in a real-world analysis, see @ref{General program 
usage tutorial} and @ref{Detecting large extended targets}).
 
 @itemize
 @item
@@ -19194,37 +14048,22 @@ targets}).
 @ref{MakeProfiles}: Aperture creation for known positions.
 @end itemize
 
-These programs will/can return labeled dataset(s) to be fed into
-MakeCatalog. A labeled dataset for measurement has the same size/dimensions
-as the input, but with integer valued pixels that have the label/counter
-for each sub-set of pixels that must be measured together. For example all
-the pixels covering one galaxy in an image, get the same label.
-
-The requested measurements are then done on similarly labeled pixels. The
-final result is a catalog where each row corresponds to the measurements on
-pixels with a specific label. For example the flux weighted average position
-of all the pixels with a label of 42 will be written into the 42nd row of
-the output catalog/table's central position column@footnote{See
-@ref{Measuring elliptical parameters} for a discussion on this and the
-derivation of positional parameters, which includes the
-center.}. Similarly, the sum of all these pixels will be the 42nd row in
-the brightness column and etc. Pixels with labels equal to, or smaller
-than, zero will be ignored by MakeCatalog. In other words, the number of
-rows in MakeCatalog's output is already known before running it (the
-maximum value of the labeled dataset).
-
-Before getting into the details of running MakeCatalog (in @ref{Invoking
-astmkcatalog}, we'll start with a discussion on the basics of its approach
-to separating detection from measurements in @ref{Detection and catalog
-production}. A very important factor in any measurement is understanding
-its validity range, or limits. Therefore in @ref{Quantifying measurement
-limits}, we'll discuss how to estimate the reliability of the detection and
-basic measurements. This section will continue with a derivation of
-elliptical parameters from the labeled datasets in @ref{Measuring
-elliptical parameters}. For those who feel MakeCatalog's existing
-measurements/columns aren't enough and would like to add further
-measurements, in @ref{Adding new columns to MakeCatalog}, a checklist of
-steps is provided for readily adding your own new measurements/columns.
+These programs will/can return labeled dataset(s) to be fed into MakeCatalog.
+A labeled dataset for measurement has the same size/dimensions as the input, 
but with integer valued pixels that have the label/counter for each sub-set of 
pixels that must be measured together.
+For example, all the pixels covering one galaxy in an image get the same label.
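+
+As a toy sketch, a labeled dataset for a @mymath{4\times4} input might look like this (pixels with 0 are ignored, all pixels labeled 1 are measured together, as are all pixels labeled 2):
+
+@example
+0 0 1 1
+0 0 1 1
+2 2 0 0
+2 2 2 0
+@end example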
+
+The requested measurements are then done on similarly labeled pixels.
+The final result is a catalog where each row corresponds to the measurements 
on pixels with a specific label.
+For example the flux weighted average position of all the pixels with a label 
of 42 will be written into the 42nd row of the output catalog/table's central 
position column@footnote{See @ref{Measuring elliptical parameters} for a 
discussion on this and the derivation of positional parameters, which includes 
the center.}.
+Similarly, the sum of all these pixels will be the 42nd row in the brightness 
column, etc.
+Pixels with labels equal to, or smaller than, zero will be ignored by 
MakeCatalog.
+In other words, the number of rows in MakeCatalog's output is already known 
before running it (the maximum value of the labeled dataset).
+
+Before getting into the details of running MakeCatalog (in @ref{Invoking astmkcatalog}), we'll start with a discussion on the basics of its approach to separating detection from measurements in @ref{Detection and catalog production}.
+A very important factor in any measurement is understanding its validity 
range, or limits.
+Therefore in @ref{Quantifying measurement limits}, we'll discuss how to 
estimate the reliability of the detection and basic measurements.
+This section will continue with a derivation of elliptical parameters from the 
labeled datasets in @ref{Measuring elliptical parameters}.
+For those who feel MakeCatalog's existing measurements/columns aren't enough 
and would like to add further measurements, in @ref{Adding new columns to 
MakeCatalog}, a checklist of steps is provided for readily adding your own new 
measurements/columns.
 
 @menu
 * Detection and catalog production::  Discussing why/how to treat these 
separately
@@ -19237,68 +14076,42 @@ steps is provided for readily adding your own new 
measurements/columns.
 @node Detection and catalog production, Quantifying measurement limits, 
MakeCatalog, MakeCatalog
 @subsection Detection and catalog production
 
-Most existing common tools in low-level astronomical data-analysis (for
-example
-SExtractor@footnote{@url{https://www.astromatic.net/software/sextractor}})
-merge the two processes of detection and measurement (catalog production)
-in one program. However, in light of Gnuastro's modularized approach
-(modeled on the Unix system) detection is separated from measurements and
-catalog production. This modularity is therefore new to many experienced
-astronomers and deserves a short review here. Further discussion on the
-benefits of this methodology can be seen in
-@url{https://arxiv.org/abs/1611.06387v1, Akhlaghi [2016]}.
-
-As discussed in the introduction of @ref{MakeCatalog}, detection
-(identifying which pixels to do measurements on) can be done with different
-programs. Their outputs (a labeled dataset) can be directly fed into
-MakeCatalog to do the measurements and write the result as a
-catalog/table. Beyond that, Gnuastro's modular approach has many benefits
-that will become clear as you get more experienced in astronomical data
-analysis and want to be more creative in using your valuable data for the
-exciting scientific project you are working on. In short the reasons for
-this modularity can be classified as below:
+Most existing common tools in low-level astronomical data-analysis (for 
example 
SExtractor@footnote{@url{https://www.astromatic.net/software/sextractor}}) 
merge the two processes of detection and measurement (catalog production) in 
one program.
+However, in light of Gnuastro's modularized approach (modeled on the Unix 
system) detection is separated from measurements and catalog production.
+This modularity is therefore new to many experienced astronomers and deserves 
a short review here.
+Further discussion on the benefits of this methodology can be seen in 
@url{https://arxiv.org/abs/1611.06387v1, Akhlaghi [2016]}.
+
+As discussed in the introduction of @ref{MakeCatalog}, detection (identifying 
which pixels to do measurements on) can be done with different programs.
+Their outputs (a labeled dataset) can be directly fed into MakeCatalog to do 
the measurements and write the result as a catalog/table.
+Beyond that, Gnuastro's modular approach has many benefits that will become 
clear as you get more experienced in astronomical data analysis and want to be 
more creative in using your valuable data for the exciting scientific project 
you are working on.
+In short, the reasons for this modularity can be classified as below:
 
 @itemize
 
 @item
-Simplicity/robustness of independent, modular tools: making a catalog is a
-logically separate process from labeling (detection, segmentation, or
-aperture production). A user might want to do certain operations on the
-labeled regions before creating a catalog for them. Another user might want
-the properties of the same pixels/objects in another image (another filter
-for example) to measure the colors or SED fittings.
-
-Here is an example of doing both: suppose you have images in various
-broad band filters at various resolutions and orientations. The image
-of one color will thus not lie exactly on another or even be in the
-same scale. However, it is imperative that the same pixels be used in
-measuring the colors of galaxies.
-
-To solve the problem, NoiseChisel can be run on the reference image to
-generate the labeled detection image. Afterwards, the labeled image can be
-warped into the grid of the other color (using @ref{Warp}). MakeCatalog
-will then generate the same catalog for both colors (with the different
-labeled images). It is currently customary to warp the images to the same
-pixel grid, however, modification of the scientific dataset is very harmful
-for the data and creates correlated noise. It is much more accurate to do
-the transformations on the labeled image.
+Simplicity/robustness of independent, modular tools: making a catalog is a 
logically separate process from labeling (detection, segmentation, or aperture 
production).
+A user might want to do certain operations on the labeled regions before 
creating a catalog for them.
+Another user might want the properties of the same pixels/objects in another image (another filter for example) to measure colors or do SED fitting.
+
+Here is an example of doing both: suppose you have images in various broad 
band filters at various resolutions and orientations.
+The image of one color will thus not lie exactly on another or even be in the 
same scale.
+However, it is imperative that the same pixels be used in measuring the colors 
of galaxies.
+
+To solve the problem, NoiseChisel can be run on the reference image to 
generate the labeled detection image.
+Afterwards, the labeled image can be warped into the grid of the other color 
(using @ref{Warp}).
+MakeCatalog will then generate the same catalog for both colors (with the 
different labeled images).
+It is currently customary to warp the images to the same pixel grid; however, modifying the scientific dataset is very harmful for the data and creates correlated noise.
+It is much more accurate to do the transformations on the labeled image (see the sketch after this list).
 
 @item
-Complexity of a monolith: Adding in a catalog functionality to the detector
-program will add several more steps (and many more options) to its
-processing that can equally well be done outside of it. This makes
-following what the program does harder for the users and developers, it can
-also potentially add many bugs.
-
-As an example, if the parameter you want to measure over one profile is not
-provided by the developers of MakeCatalog. You can simply open this tiny
-little program and add your desired calculation easily. This process is
-discussed in @ref{Adding new columns to MakeCatalog}. However, if making a
-catalog was part of NoiseChisel for example, adding a new
-column/measurement would require a lot of energy to understand all the
-steps and internal structures of that huge program. It might even be so
-intertwined with its processing, that adding new columns might cause
-problems/bugs in its primary job (detection).
+Complexity of a monolith: adding catalog functionality to the detector program will add several more steps (and many more options) to its processing that can equally well be done outside of it.
+This makes following what the program does harder for the users and developers; it can also potentially add many bugs.
+
+As an example, if the parameter you want to measure over one profile is not provided by the developers of MakeCatalog, you can simply open this tiny program and add your desired calculation easily.
+This process is discussed in @ref{Adding new columns to MakeCatalog}.
+However, if making a catalog was part of NoiseChisel for example, adding a new 
column/measurement would require a lot of energy to understand all the steps 
and internal structures of that huge program.
+It might even be so intertwined with its processing, that adding new columns 
might cause problems/bugs in its primary job (detection).
 
 @end itemize
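+
+As a sketch of the color-measurement scenario in the first item above (all file names here are hypothetical, and the exact Warp options depend on your grids, see @ref{Warp}):
+
+@example
+## Detect and segment the reference image.
+$ astnoisechisel reference.fits
+$ astsegment reference_detected.fits
+
+## After warping the labeled image (not the data) onto the other
+## filter's pixel grid with Warp, measure the other filter with
+## the same labels (--valuesfile gives the image to measure on).
+$ astmkcatalog warped-labels.fits --valuesfile=other-filter.fits \
+               --ids --brightness
+@end example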
 
@@ -19316,157 +14129,93 @@ problems/bugs in its primary job (detection).
 @cindex Object magnitude limit
 @cindex Limit, object/clump magnitude
 @cindex Magnitude, object/clump detection limit
-No measurement on a real dataset can be perfect: you can only reach a
-certain level/limit of accuracy. Therefore, a meaningful (scientific)
-analysis requires an understanding of these limits for the dataset and your
-analysis tools: different datasets have different noise properties and
-different detection methods (one method/algorith/software that is run with
-a different set of parameters is considered as a different detection
-method) will have different abilities to detect or measure certain kinds of
-signal (astronomical objects) and their properties in the dataset. Hence,
-quantifying the detection and measurement limitations with a particular
-dataset and analysis tool is the most crucial/critical aspect of any
-high-level analysis.
-
-Here, we'll review some of the most general limits that are important in
-any astronomical data analysis and how MakeCatalog makes it easy to find
-them. Depending on the higher-level analysis, there are more tests that
-must be done, but these are relatively low-level and usually necessary in
-most cases. In astronomy, it is common to use the magnitude (a unit-less
-scale) and physical units, see @ref{Flux Brightness and
-magnitude}. Therefore the measurements discussed here are commonly used in
-units of magnitudes.
+No measurement on a real dataset can be perfect: you can only reach a certain 
level/limit of accuracy.
+Therefore, a meaningful (scientific) analysis requires an understanding of 
these limits for the dataset and your analysis tools: different datasets have 
different noise properties and different detection methods (one 
method/algorithm/software that is run with a different set of parameters is 
considered as a different detection method) will have different abilities to 
detect or measure certain kinds of signal (astronomical objects) and their 
properties in the dataset.
+Hence, quantifying the detection and measurement limitations with a particular 
dataset and analysis tool is the most crucial/critical aspect of any high-level 
analysis.
+
+Here, we'll review some of the most general limits that are important in any 
astronomical data analysis and how MakeCatalog makes it easy to find them.
+Depending on the higher-level analysis, there are more tests that must be 
done, but these are relatively low-level and usually necessary in most cases.
+In astronomy, it is common to use the magnitude (a unit-less scale) and 
physical units, see @ref{Flux Brightness and magnitude}.
+Therefore the measurements discussed here are commonly used in units of 
magnitudes.
 
 @table @asis
 
 @item Surface brightness limit (of whole dataset)
 @cindex Surface brightness
-As we make more observations on one region of the sky, and add the
-observations into one dataset, the signal and noise both increase. However,
-the signal increase much faster than the noise: assuming you add @mymath{N}
-datasets with equal exposure times, the signal will increases as a multiple
-of @mymath{N}, while noise increases as @mymath{\sqrt{N}}. Thus this
-increases the signal-to-noise ratio. Qualitatively, fainter (per pixel)
-parts of the objects/signal in the image will become more
-visible/detectable. The noise-level is known as the dataset's surface
-brightness limit.
-
-You can think of the noise as muddy water that is completely covering a
-flat ground@footnote{The ground is the sky value in this analogy, see
-@ref{Sky value}. Note that this analogy only holds for a flat sky value
-across the surface of the image or ground.}. The signal (or astronomical
-objects in this analogy) will be summits/hills that start from the flat sky
-level (under the muddy water) and can sometimes reach outside of the muddy
-water. Let's assume that in your first observation the muddy water has just
-been stirred and you can't see anything through it. As you wait and make
-more observations/exposures, the mud settles down and the @emph{depth} of
-the transparent water increases, making the summits visible. As the depth
-of clear water increases, the parts of the hills with lower heights (parts
-with lower surface brightness) can be seen more clearly. In this analogy,
-height (from the ground) is @emph{surface brightness}@footnote{Note that
-this muddy water analogy is not perfect, because while the water-level
-remains the same all over a peak, in data analysis, the Poisson noise
-increases with the level of data.} and the height of the muddy water is
-your surface brightness limit.
+As we make more observations on one region of the sky, and add the 
observations into one dataset, the signal and noise both increase.
+However, the signal increases much faster than the noise: assuming you add @mymath{N} datasets with equal exposure times, the signal will increase as a multiple of @mymath{N}, while the noise increases as @mymath{\sqrt{N}}.
+Thus this increases the signal-to-noise ratio.
+Qualitatively, fainter (per pixel) parts of the objects/signal in the image 
will become more visible/detectable.
+The noise-level is known as the dataset's surface brightness limit.
+
+You can think of the noise as muddy water that is completely covering a flat 
ground@footnote{The ground is the sky value in this analogy, see @ref{Sky 
value}.
+Note that this analogy only holds for a flat sky value across the surface of 
the image or ground.}.
+The signal (or astronomical objects in this analogy) will be summits/hills 
that start from the flat sky level (under the muddy water) and can sometimes 
reach outside of the muddy water.
+Let's assume that in your first observation the muddy water has just been 
stirred and you can't see anything through it.
+As you wait and make more observations/exposures, the mud settles down and the 
@emph{depth} of the transparent water increases, making the summits visible.
+As the depth of clear water increases, the parts of the hills with lower 
heights (parts with lower surface brightness) can be seen more clearly.
+In this analogy, height (from the ground) is @emph{surface 
brightness}@footnote{Note that this muddy water analogy is not perfect, because 
while the water-level remains the same all over a peak, in data analysis, the 
Poisson noise increases with the level of data.} and the height of the muddy 
water is your surface brightness limit.
 
 @cindex Data's depth
-The outputs of NoiseChisel include the Sky standard deviation
-(@mymath{\sigma}) on every group of pixels (a mesh) that were calculated
-from the undetected pixels in each tile, see @ref{Tessellation} and
-@ref{NoiseChisel output}. Let's take @mymath{\sigma_m} as the median
-@mymath{\sigma} over the successful meshes in the image (prior to
-interpolation or smoothing).
-
-On different instruments, pixels have different physical sizes (for example
-in micro-meters, or spatial angle over the sky). Nevertheless, a pixel is
-our unit of data collection. In other words, while quantifying the noise,
-the physical or projected size of the pixels is irrelevant. We thus define
-the Surface brightness limit or @emph{depth}, in units of magnitude/pixel,
-of a data-set, with zeropoint magnitude @mymath{z}, with the @mymath{n}th
-multiple of @mymath{\sigma_m} as (see @ref{Flux Brightness and magnitude}):
+The outputs of NoiseChisel include the Sky standard deviation 
(@mymath{\sigma}) on every group of pixels (a mesh) that were calculated from 
the undetected pixels in each tile, see @ref{Tessellation} and @ref{NoiseChisel 
output}.
+Let's take @mymath{\sigma_m} as the median @mymath{\sigma} over the successful 
meshes in the image (prior to interpolation or smoothing).
+
+On different instruments, pixels have different physical sizes (for example in 
micro-meters, or spatial angle over the sky).
+Nevertheless, a pixel is our unit of data collection.
+In other words, while quantifying the noise, the physical or projected size of 
the pixels is irrelevant.
+We thus define the surface brightness limit or @emph{depth} of a data-set with zeropoint magnitude @mymath{z}, in units of magnitude/pixel, at the @mymath{n}th multiple of @mymath{\sigma_m} as (see @ref{Flux Brightness and magnitude}):
 
 @dispmath{SB_{\rm Pixel}=-2.5\times\log_{10}{(n\sigma_m)}+z}
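+
+For example, with a hypothetical zeropoint of @mymath{z=22.5}, @mymath{\sigma_m=0.01} (in the image's units) and @mymath{n=3}, the surface brightness limit would be @mymath{-2.5\times\log_{10}(3\times0.01)+22.5\approx26.3} magnitudes/pixel.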
 
 @cindex XDF survey
 @cindex CANDELS survey
 @cindex eXtreme Deep Field (XDF) survey
-As an example, the XDF survey covers part of the sky that the Hubble space
-telescope has observed the most (for 85 orbits) and is consequently very
-small (@mymath{\sim4} arcmin@mymath{^2}). On the other hand, the CANDELS
-survey, is one of the widest multi-color surveys covering several fields
-(about 720 arcmin@mymath{^2}) but its deepest fields have only 9 orbits
-observation. The depth of the XDF and CANDELS-deep surveys in the near
-infrared WFC3/F160W filter are respectively 34.40 and 32.45
-magnitudes/pixel. In a single orbit image, this same field has a depth of
-31.32. Recall that a larger magnitude corresponds to less brightness.
-
-The low-level magnitude/pixel measurement above is only useful when all the
-datasets you want to use belong to one instrument (telescope and
-camera). However, you will often find yourself using datasets from various
-instruments with different pixel scales (projected pixel sizes). If we know
-the pixel scale, we can obtain a more easily comparable surface brightness
-limit in units of: magnitude/arcsec@mymath{^2}. Let's assume that the
-dataset has a zeropoint value of @mymath{z}, and every pixel is @mymath{p}
-arcsec@mymath{^2} (so @mymath{A/p} is the number of pixels that cover an
-area of @mymath{A} arcsec@mymath{^2}). If the surface brightness is desired
-at the @mymath{n}th multiple of @mymath{\sigma_m}, the following equation
-(in units of magnitudes per A arcsec@mymath{^2}) can be used:
-
-@dispmath{SB_{\rm Projected}=-2.5\times\log_{10}{\left(n\sigma_m\sqrt{A\over
-p}\right)+z}}
-
-Note that this is just an extrapolation of the per-pixel measurement
-@mymath{\sigma_m}. So it should be used with extreme care: for example the
-dataset must have an approximately flat depth or noise properties
-overall. A more accurate measure for each detection over the dataset is
-known as the @emph{upper-limit magnitude} which actually uses random
-positioning of each detection's area/footprint (see below). It doesn't
-extrapolate and even accounts for correlated noise patterns in relation to
-that detection. Therefore, the upper-limit magnitude is a much better
-measure of your dataset's surface brightness limit for each particular
-object.
-
-MakeCatalog will calculate the input dataset's @mymath{SB_{\rm Pixel}} and
-@mymath{SB_{\rm Projected}} and write them as comments/meta-data in the
-output catalog(s). Just note that @mymath{SB_{\rm Projected}} is only
-calculated if the input has World Coordinate System (WCS).
+As an example, the XDF survey covers part of the sky that the Hubble Space Telescope has observed the most (for 85 orbits) and is consequently very small (@mymath{\sim4} arcmin@mymath{^2}).
+On the other hand, the CANDELS survey is one of the widest multi-color surveys, covering several fields (about 720 arcmin@mymath{^2}), but its deepest fields have only 9 orbits of observation.
+The depth of the XDF and CANDELS-deep surveys in the near infrared WFC3/F160W 
filter are respectively 34.40 and 32.45 magnitudes/pixel.
+In a single orbit image, this same field has a depth of 31.32.
+Recall that a larger magnitude corresponds to less brightness.
+
+The low-level magnitude/pixel measurement above is only useful when all the 
datasets you want to use belong to one instrument (telescope and camera).
+However, you will often find yourself using datasets from various instruments 
with different pixel scales (projected pixel sizes).
+If we know the pixel scale, we can obtain a more easily comparable surface brightness limit in units of magnitude/arcsec@mymath{^2}.
+Let's assume that the dataset has a zeropoint value of @mymath{z}, and every 
pixel is @mymath{p} arcsec@mymath{^2} (so @mymath{A/p} is the number of pixels 
that cover an area of @mymath{A} arcsec@mymath{^2}).
+If the surface brightness is desired at the @mymath{n}th multiple of 
@mymath{\sigma_m}, the following equation (in units of magnitudes per A 
arcsec@mymath{^2}) can be used:
+
+@dispmath{SB_{\rm Projected}=-2.5\times\log_{10}{\left(n\sigma_m\sqrt{A\over p}\right)}+z}
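+
+Continuing the hypothetical numbers above (@mymath{z=22.5}, @mymath{\sigma_m=0.01}, @mymath{n=3}), with a pixel area of @mymath{p=0.04} arcsec@mymath{^2} and @mymath{A=1} arcsec@mymath{^2}, we get @mymath{-2.5\times\log_{10}(0.03\times\sqrt{25})+22.5\approx24.6} magnitudes per arcsec@mymath{^2}.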
+
+Note that this is just an extrapolation of the per-pixel measurement 
@mymath{\sigma_m}.
+So it should be used with extreme care: for example the dataset must have an 
approximately flat depth or noise properties overall.
+A more accurate measure for each detection over the dataset is known as the 
@emph{upper-limit magnitude} which actually uses random positioning of each 
detection's area/footprint (see below).
+It doesn't extrapolate and even accounts for correlated noise patterns in 
relation to that detection.
+Therefore, the upper-limit magnitude is a much better measure of your 
dataset's surface brightness limit for each particular object.
+
+MakeCatalog will calculate the input dataset's @mymath{SB_{\rm Pixel}} and 
@mymath{SB_{\rm Projected}} and write them as comments/meta-data in the output 
catalog(s).
+Just note that @mymath{SB_{\rm Projected}} is only calculated if the input has 
World Coordinate System (WCS).
 
 @item Completeness limit (of each detection)
 @cindex Completeness
-As the surface brightness of the objects decreases, the ability to detect
-them will also decrease. An important statistic is thus the fraction of
-objects of similar morphology and brightness that will be identified with
-our detection algorithm/parameters in the given image. This fraction is
-known as completeness. For brighter objects, completeness is 1: all bright
-objects that might exist over the image will be detected. However, as we go
-to objects of lower overall surface brightness, we will fail to detect
-some, and gradually we are not able to detect anything any more. For a
-given profile, the magnitude where the completeness drops below a certain
-level (usually above @mymath{90\%}) is known as the completeness limit.
+As the surface brightness of the objects decreases, the ability to detect them 
will also decrease.
+An important statistic is thus the fraction of objects of similar morphology 
and brightness that will be identified with our detection algorithm/parameters 
in the given image.
+This fraction is known as completeness.
+For brighter objects, completeness is 1: all bright objects that might exist 
over the image will be detected.
+However, as we go to objects of lower overall surface brightness, we will fail 
to detect some, and gradually we are not able to detect anything any more.
+For a given profile, the magnitude where the completeness drops below a 
certain level (usually above @mymath{90\%}) is known as the completeness limit.
 
 @cindex Purity
 @cindex False detections
 @cindex Detections false
-Another important parameter in measuring completeness is purity: the
-fraction of true detections to all true detections. In effect purity is the
-measure of contamination by false detections: the higher the purity, the
-lower the contamination. Completeness and purity are anti-correlated: if we
-can allow a large number of false detections (that we might be able to
-remove by other means), we can significantly increase the completeness
-limit.
-
-One traditional way to measure the completeness and purity of a given
-sample is by embedding mock profiles in regions of the image with no
-detection. However in such a study we must be really careful to choose
-model profiles as similar to the target of interest as possible.
+Another important parameter in measuring completeness is purity: the fraction of true detections to all detections.
+In effect purity is the measure of contamination by false detections: the 
higher the purity, the lower the contamination.
+Completeness and purity are anti-correlated: if we can allow a large number of 
false detections (that we might be able to remove by other means), we can 
significantly increase the completeness limit.
+
+One traditional way to measure the completeness and purity of a given sample 
is by embedding mock profiles in regions of the image with no detection.
+However in such a study we must be really careful to choose model profiles as 
similar to the target of interest as possible.
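+
+As a rough sketch of this process (the file names are hypothetical and the profile parameters are left to you), the mock profiles can be added to the image with MakeProfiles and the detection re-run:
+
+@example
+$ astmkprof mock-catalog.txt --background=image.fits \
+            --output=with-mocks.fits
+$ astnoisechisel with-mocks.fits
+@end example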
 
 @item Magnitude measurement error (of each detection)
-Any measurement has an error and this includes the derived magnitude for an
-object. Note that this value is only meaningful when the object's magnitude
-is brighter than the upper-limit magnitude (see the next items in this
-list). As discussed in @ref{Flux Brightness and magnitude}, the magnitude
-(@mymath{M}) of an object with brightness @mymath{B} and Zeropoint
-magnitude @mymath{z} can be written as:
+Any measurement has an error and this includes the derived magnitude for an 
object.
+Note that this value is only meaningful when the object's magnitude is 
brighter than the upper-limit magnitude (see the next items in this list).
+As discussed in @ref{Flux Brightness and magnitude}, the magnitude 
(@mymath{M}) of an object with brightness @mymath{B} and Zeropoint magnitude 
@mymath{z} can be written as:
 
 @dispmath{M=-2.5\log_{10}(B)+z}
 
@@ -19476,83 +14225,49 @@ Calculating the derivative with respect to 
@mymath{B}, we get:
 @dispmath{{dM\over dB} = {-2.5\over {B\times ln(10)}}}
 
 @noindent
-From the Tailor series (@mymath{\Delta{M}=dM/dB\times\Delta{B}}), we can
-write:
+From the Taylor series (@mymath{\Delta{M}=dM/dB\times\Delta{B}}), we can write:
 
 @dispmath{\Delta{M} = \left|{-2.5\over ln(10)}\right|\times{\Delta{B}\over{B}}}
 
 @noindent
-But, @mymath{\Delta{B}/B} is just the inverse of the Signal-to-noise
-ratio (@mymath{S/N}), so we can write the error in magnitude in terms of
-the signal-to-noise ratio:
+But, @mymath{\Delta{B}/B} is just the inverse of the Signal-to-noise ratio 
(@mymath{S/N}), so we can write the error in magnitude in terms of the 
signal-to-noise ratio:
 
 @dispmath{ \Delta{M} = {2.5\over{S/N\times ln(10)}} }
 
-MakeCatalog uses this relation to estimate the magnitude errors. The
-signal-to-noise ratio is calculated in different ways for clumps and
-objects (see @url{https://arxiv.org/abs/1505.01664, Akhlaghi and Ichikawa
-[2015]}), but this single equation can be used to estimate the measured
-magnitude error afterwards for any type of target.
+MakeCatalog uses this relation to estimate the magnitude errors.
+The signal-to-noise ratio is calculated in different ways for clumps and 
objects (see @url{https://arxiv.org/abs/1505.01664, Akhlaghi and Ichikawa 
[2015]}), but this single equation can be used to estimate the measured 
magnitude error afterwards for any type of target.
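+
+For example, a detection with a signal-to-noise ratio of 10 has a magnitude error of @mymath{\Delta{M}=2.5/(10\times\ln{10})\approx0.11} magnitudes.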
 
 @item Upper limit magnitude (of each detection)
-Due to the noisy nature of data, it is possible to get arbitrarily low
-values for a faint object's brightness (or arbitrarily high
-@emph{magnitudes}). Given the scatter caused by the dataset's noise, values
-fainter than a certain level are meaningless: another similar depth
-observation will give a radically different value.
-
-For example, while the depth of the image is 32 magnitudes/pixel, a
-measurement that gives a magnitude of 36 for a @mymath{\sim100} pixel
-object is clearly unreliable. In another similar depth image, we might
-measure a magnitude of 30 for it, and yet another might give
-33. Furthermore, due to the noise scatter so close to the depth of the
-data-set, the total brightness might actually get measured as a negative
-value, so no magnitude can be defined (recall that a magnitude is a base-10
-logarithm). This problem usually becomes relevant when the detection labels
-were not derived from the values being measured (for example when you are
-estimating colors, see @ref{MakeCatalog}).
+Due to the noisy nature of data, it is possible to get arbitrarily low values 
for a faint object's brightness (or arbitrarily high @emph{magnitudes}).
+Given the scatter caused by the dataset's noise, values fainter than a certain 
level are meaningless: another similar depth observation will give a radically 
different value.
+
+For example, if the depth of the image is 32 magnitudes/pixel, a measurement that gives a magnitude of 36 for a @mymath{\sim100} pixel object is clearly unreliable.
+In another similar depth image, we might measure a magnitude of 30 for it, and 
yet another might give 33.
+Furthermore, due to the noise scatter so close to the depth of the dataset, the total brightness might actually get measured as a negative value, so no magnitude can be defined (recall that a magnitude is a base-10 logarithm).
+This problem usually becomes relevant when the detection labels were not 
derived from the values being measured (for example when you are estimating 
colors, see @ref{MakeCatalog}).
 
 @cindex Upper limit magnitude
 @cindex Magnitude, upper limit
-Using such unreliable measurements will directly affect our analysis, so we
-must not use the raw measurements. But how can we know how reliable a
-measurement on a given dataset is?
-
-When we confront such unreasonably faint magnitudes, there is one thing we
-can deduce: that if something actually exists here (possibly buried deep
-under the noise), it's inherent magnitude is fainter than an @emph{upper
-limit magnitude}. To find this upper limit magnitude, we place the object's
-footprint (segmentation map) over random parts of the image where there are
-no detections, so we only have pure (possibly correlated) noise, along with
-undetected objects. Doing this a large number of times will give us a
-distribution of brightness values. The standard deviation (@mymath{\sigma})
-of that distribution can be used to quantify the upper limit magnitude.
+Using such unreliable measurements will directly affect our analysis, so we 
must not use the raw measurements.
+But how can we know how reliable a measurement on a given dataset is?
+
+When we confront such unreasonably faint magnitudes, there is one thing we can deduce: if something actually exists here (possibly buried deep under the noise), its inherent magnitude is fainter than an @emph{upper limit magnitude}.
+To find this upper limit magnitude, we place the object's footprint 
(segmentation map) over random parts of the image where there are no 
detections, so we only have pure (possibly correlated) noise, along with 
undetected objects.
+Doing this a large number of times will give us a distribution of brightness 
values.
+The standard deviation (@mymath{\sigma}) of that distribution can be used to 
quantify the upper limit magnitude.
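+
+To make the procedure concrete, here is a toy C sketch (purely illustrative, @emph{not} MakeCatalog's implementation; the footprint area, number of samplings and zero point are all hypothetical): it sums pure-noise pixels over a 100-pixel footprint many times, then converts the standard deviation of those sums into a @mymath{1\sigma} upper-limit magnitude:
+
+@example
+#include <stdio.h>
+#include <stdlib.h>
+#include <math.h>
+
+#define PI 3.14159265358979323846
+
+/* Crude Gaussian deviate (Box-Muller), only for this demo. */
+static double
+gauss(void)
+@{
+  double u=(rand()+1.0)/(RAND_MAX+2.0);
+  double v=(rand()+1.0)/(RAND_MAX+2.0);
+  return sqrt(-2*log(u)) * cos(2*PI*v);
+@}
+
+int
+main(void)
+@{
+  size_t i, j, area=100, ntimes=1000;
+  double sum, s=0.0, s2=0.0, sigma, zeropoint=25.0;
+
+  /* Sum 'area' pure-noise pixels (sky std. dev. of 1.0),
+     repeating 'ntimes' to build the distribution of sums. */
+  for(i=0;i<ntimes;++i)
+    @{
+      for(sum=0.0,j=0;j<area;++j) sum+=gauss();
+      s+=sum;  s2+=sum*sum;
+    @}
+
+  /* Standard deviation of the sums (~sqrt(area) here).    */
+  sigma=sqrt(s2/ntimes - (s/ntimes)*(s/ntimes));
+
+  /* 1-sigma upper-limit magnitude for this footprint.     */
+  printf("Upper-limit magnitude: %.2f\n",
+         -2.5*log10(sigma)+zeropoint);
+  return 0;
+@}
+@end example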
 
 @cindex Correlated noise
-Traditionally, faint/small object photometry was done using fixed circular
-apertures (for example with a diameter of @mymath{N} arc-seconds). Hence,
-the upper limit was like the depth discussed above: one value for the whole
-image. The problem with this simplified approach is that the number of
-pixels in the aperture directly affects the final distribution and thus
-magnitude. Also the image correlated noise might actually create certain
-patters, so the shape of the object can also affect the final result
-result. Fortunately, with the much more advanced hardware and software of
-today, we can make customized segmentation maps for each object.
-
-When requested, MakeCatalog will randomly place each target's footprint
-over the dataset as described above and estimate the resulting
-distribution's properties (like the upper limit magnitude). The procedure
-is fully configurable with the options in @ref{Upper-limit settings}. If
-one value for the whole image is required, you can either use the surface
-brightness limit above or make a circular aperture and feed it into
-MakeCatalog to request an upper-limit magnitude for it@footnote{If you
-intend to make apertures manually and not use a detection map (for example
-from @ref{Segment}), don't forget to use the @option{--upmaskfile} to give
-NoiseChisel's output (or any a binary map, marking detected pixels, see
-@ref{NoiseChisel output}) as a mask. Otherwise, the footprints may randomly
-fall over detections, giving highly skewed distributions, with wrong
-upper-limit distributions. See The description of @option{--upmaskfile} in
-@ref{Upper-limit settings} for more.}.
+Traditionally, faint/small object photometry was done using fixed circular apertures (for example with a diameter of @mymath{N} arc-seconds).
+Hence, the upper limit was like the depth discussed above: one value for the whole image.
+The problem with this simplified approach is that the number of pixels in the aperture directly affects the final distribution and thus the magnitude.
+Also, the image's correlated noise might actually create certain patterns, so the shape of the object can also affect the final result.
+Fortunately, with the much more advanced hardware and software of today, we can make customized segmentation maps for each object.
+
+When requested, MakeCatalog will randomly place each target's footprint over 
the dataset as described above and estimate the resulting distribution's 
properties (like the upper limit magnitude).
+The procedure is fully configurable with the options in @ref{Upper-limit 
settings}.
+If one value for the whole image is required, you can either use the surface brightness limit above or make a circular aperture and feed it into MakeCatalog to request an upper-limit magnitude for it@footnote{If you intend to make apertures manually and not use a detection map (for example from @ref{Segment}), don't forget to use the @option{--upmaskfile} option to give NoiseChisel's output (or any binary map marking detected pixels, see @ref{NoiseChisel output}) as a mask.
+Otherwise, the footprints may randomly fall over detections, giving highly skewed distributions and thus wrong upper-limit magnitudes.
+See the description of @option{--upmaskfile} in @ref{Upper-limit settings} for more.}.
 
 @end table
 
@@ -19562,33 +14277,25 @@ upper-limit distributions. See The description of 
@option{--upmaskfile} in
 @node Measuring elliptical parameters, Adding new columns to MakeCatalog, 
Quantifying measurement limits, MakeCatalog
 @subsection Measuring elliptical parameters
 
-The shape or morphology of a target is one of the most commonly desired
-parameters of a target. Here, we will review the derivation of the most
-basic/simple morphological parameters: the elliptical parameters for a set
-of labeled pixels. The elliptical parameters are: the (semi-)major axis,
-the (semi-)minor axis and the position angle along with the central
-position of the profile. The derivations below follow the SExtractor manual
-derivations with some added explanations for easier reading.
+The shape or morphology of a target is one of its most commonly desired parameters.
+Here, we will review the derivation of the most basic/simple morphological 
parameters: the elliptical parameters for a set of labeled pixels.
+The elliptical parameters are: the (semi-)major axis, the (semi-)minor axis 
and the position angle along with the central position of the profile.
+The derivations below follow the SExtractor manual derivations with some added 
explanations for easier reading.
 
 @cindex Moments
-Let's begin with one dimension for simplicity: Assume we have a set of
-@mymath{N} values @mymath{B_i} (for example showing the spatial
-distribution of a target's brightness), each at position @mymath{x_i}. The
-simplest parameter we can define is the geometric center of the object
-(@mymath{x_g}) (ignoring the brightness values):
-@mymath{x_g=(\sum_ix_i)/N}. @emph{Moments} are defined to incorporate both
-the value (brightness) and position of the data. The first moment can be
-written as:
+Let's begin with one dimension for simplicity: assume we have a set of @mymath{N} values @mymath{B_i} (for example showing the spatial distribution of a target's brightness), each at position @mymath{x_i}.
+The simplest parameter we can define is the geometric center of the object (@mymath{x_g}), ignoring the brightness values: @mymath{x_g=(\sum_ix_i)/N}.
+@emph{Moments} are defined to incorporate both the value (brightness) and 
position of the data.
+The first moment can be written as:
 
 @dispmath{\overline{x}={\sum_iB_ix_i \over \sum_iB_i}}
 
 @cindex Variance
 @cindex Second moment
 @noindent
-This is essentially the weighted (by @mymath{B_i}) mean position. The
-geometric center (@mymath{x_g}, defined above) is a special case of this
-with all @mymath{B_i=1}. The second moment is essentially the variance of
-the distribution:
+This is essentially the weighted (by @mymath{B_i}) mean position.
+The geometric center (@mymath{x_g}, defined above) is a special case of this 
with all @mymath{B_i=1}.
+The second moment is essentially the variance of the distribution:
 
 @dispmath{\overline{x^2}\equiv{\sum_iB_i(x_i-\overline{x})^2 \over
         \sum_iB_i} = {\sum_iB_ix_i^2 \over \sum_iB_i} -
@@ -19597,56 +14304,34 @@ the distribution:
 
 @cindex Standard deviation
 @noindent
-The last step was done from the definition of @mymath{\overline{x}}. Hence,
-the square root of @mymath{\overline{x^2}} is the spatial standard
-deviation (along the one-dimension) of this particular brightness
-distribution (@mymath{B_i}). Crudely (or qualitatively), you can think of
-its square root as the distance (from @mymath{\overline{x}}) which contains
-a specific amount of the flux (depending on the @mymath{B_i}
-distribution). Similar to the first moment, the geometric second moment can
-be found by setting all @mymath{B_i=1}. So while the first moment
-quantified the position of the brightness distribution, the second moment
-quantifies how that brightness is dispersed about the first moment. In
-other words, it quantifies how ``sharp'' the object's image is.
+The last step was done from the definition of @mymath{\overline{x}}.
+Hence, the square root of @mymath{\overline{x^2}} is the spatial standard 
deviation (along the one-dimension) of this particular brightness distribution 
(@mymath{B_i}).
+Crudely (or qualitatively), you can think of this standard deviation as the distance (from @mymath{\overline{x}}) which contains a specific amount of the flux (depending on the @mymath{B_i} distribution).
+Similar to the first moment, the geometric second moment can be found by 
setting all @mymath{B_i=1}.
+So while the first moment quantified the position of the brightness 
distribution, the second moment quantifies how that brightness is dispersed 
about the first moment.
+In other words, it quantifies how ``sharp'' the object's image is.
 
 @cindex Floating point error
-Before continuing to two dimensions and the derivation of the elliptical
-parameters, let's pause for an important implementation technicality. You
-can ignore this paragraph and the next two if you don't want to implement
-these concepts. The basic definition (first definition of
-@mymath{\overline{x^2}} above) can be used without any major
-problem. However, using this fraction requires two runs over the data: one
-run to find @mymath{\overline{x}} and another run to find
-@mymath{\overline{x^2}} from @mymath{\overline{x}}, this can be slow. The
-advantage of the last fraction above, is that we can estimate both the
-first and second moments in one run (since the @mymath{-\overline{x}^2}
-term can easily be added later).
-
-The logarithmic nature of floating point number digitization creates a
-complication however: suppose the object is located between pixels 10000
-and 10020. Hence the target's pixels are only distributed over 20 pixels
-(with a standard deviation @mymath{<20}), while the mean has a value of
-@mymath{\sim10000}. The @mymath{\sum_iB_i^2x_i^2} will go to very very
-large values while the individual pixel differences will be orders of
-magnitude smaller. This will lower the accuracy of our calculation due to
-the limited accuracy of floating point operations. The variance only
-depends on the distance of each point from the mean, so we can shift all
-position by a constant/arbitrary @mymath{K} which is much closer to the
-mean: @mymath{\overline{x-K}=\overline{x}-K}. Hence we can calculate the
-second order moment using:
+Before continuing to two dimensions and the derivation of the elliptical 
parameters, let's pause for an important implementation technicality.
+You can ignore this paragraph and the next two if you don't want to implement 
these concepts.
+The basic definition (first definition of @mymath{\overline{x^2}} above) can 
be used without any major problem.
+However, using this fraction requires two runs over the data: one run to find @mymath{\overline{x}} and another to find @mymath{\overline{x^2}} from @mymath{\overline{x}}; this can be slow.
+The advantage of the last fraction above is that we can estimate both the first and second moments in one run (since the @mymath{-\overline{x}^2} term can easily be added later).
+
+The logarithmic nature of floating point number digitization creates a complication, however: suppose the object is located between pixels 10000 and 10020.
+Hence the target's pixels are only distributed over 20 pixels (with a standard deviation @mymath{<20}), while the mean has a value of @mymath{\sim10000}.
+The @mymath{\sum_iB_ix_i^2} term will go to very large values while the individual pixel differences will be orders of magnitude smaller.
+This will lower the accuracy of our calculation due to the limited accuracy of floating point operations.
+The variance only depends on the distance of each point from the mean, so we can shift all positions by a constant/arbitrary @mymath{K} which is much closer to the mean: @mymath{\overline{x-K}=\overline{x}-K}.
+Hence we can calculate the second order moment using:
 
 @dispmath{ \overline{x^2}={\sum_iB_i(x_i-K)^2 \over \sum_iB_i} -
            (\overline{x}-K)^2 }
 
 @noindent
-The closer @mymath{K} is to @mymath{\overline{x}}, the better (the sums of
-squares will involve smaller numbers), as long as @mymath{K} is within the
-object limits (in the example above: @mymath{10000\leq{K}\leq10020}), the
-floating point error induced in our calculation will be negligible. For the
-most simplest implementation, MakeCatalog takes @mymath{K} to be the
-smallest position of the object in each dimension. Since @mymath{K} is
-arbitrary and an implementation/technical detail, we will ignore it for the
-remainder of this discussion.
+The closer @mymath{K} is to @mymath{\overline{x}}, the better (the sums of squares will involve smaller numbers); as long as @mymath{K} is within the object limits (in the example above: @mymath{10000\leq{K}\leq10020}), the floating point error induced in our calculation will be negligible.
+For the simplest implementation, MakeCatalog takes @mymath{K} to be the smallest position of the object in each dimension.
+Since @mymath{K} is arbitrary and an implementation/technical detail, we will ignore it for the remainder of this discussion.
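+
+As an illustration of this shift, the following minimal C sketch (hypothetical values, not MakeCatalog's actual code) computes the first and second moments in a single pass using a shift @mymath{K}:
+
+@example
+#include <stdio.h>
+
+int
+main(void)
+@{
+  /* Hypothetical 1D brightness profile around pixel 10010. */
+  double B[]=@{1, 4, 9, 4, 1@};
+  double x0=10008;         /* Position of B[0].              */
+  double K=x0;             /* Shift: smallest position.      */
+  double sB=0, sBx=0, sBxx=0, xbar, var;
+  size_t i, n=sizeof B/sizeof *B;
+
+  for(i=0;i<n;++i)         /* Single pass over the pixels.   */
+    @{
+      double xs=(x0+i)-K;  /* Shifted position.              */
+      sB   += B[i];
+      sBx  += B[i]*xs;
+      sBxx += B[i]*xs*xs;
+    @}
+
+  xbar = sBx/sB + K;                   /* First moment.      */
+  var  = sBxx/sB - (sBx/sB)*(sBx/sB);  /* Second moment.     */
+  printf("xbar=%g, var=%g\n", xbar, var);
+  return 0;
+@}
+@end example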
 
 In two dimensions, the mean and variances can be written as:
 
@@ -19660,20 +14345,11 @@ In two dimensions, the mean and variances can be 
written as:
           \overline{xy}={\sum_iB_ix_iy_i \over \sum_iB_i} -
           \overline{x}\times\overline{y}}
 
-If an elliptical profile's major axis exactly lies along the @mymath{x}
-axis, then @mymath{\overline{x^2}} will be directly proportional with the
-profile's major axis, @mymath{\overline{y^2}} with its minor axis and
-@mymath{\overline{xy}=0}. However, in reality we are not that lucky and
-(assuming galaxies can be parameterized as an ellipse) the major axis of
-galaxies can be in any direction on the image (in fact this is one of the
-core principles behind weak-lensing by shear estimation). So the purpose of
-the remainder of this section is to define a strategy to measure the
-position angle and axis ratio of some randomly positioned ellipses in an
-image, using the raw second moments that we have calculated above in our
-image coordinates.
-
-Let's assume we have rotated the galaxy by @mymath{\theta}, the new second
-order moments are:
+If an elliptical profile's major axis lies exactly along the @mymath{x} axis, then @mymath{\overline{x^2}} will be directly proportional to the profile's major axis, @mymath{\overline{y^2}} to its minor axis, and @mymath{\overline{xy}=0}.
+However, in reality we are not that lucky and (assuming galaxies can be parameterized as ellipses) the major axis of a galaxy can be in any direction on the image (in fact, this is one of the core principles behind weak-lensing shear estimation).
+So the purpose of the remainder of this section is to define a strategy to 
measure the position angle and axis ratio of some randomly positioned ellipses 
in an image, using the raw second moments that we have calculated above in our 
image coordinates.
+
+Let's assume we have rotated the galaxy by @mymath{\theta}; the new second order moments are:
 
 @dispmath{\overline{x_\theta^2} = \overline{x^2}\cos^2\theta +
            \overline{y^2}\sin^2\theta -
@@ -19686,8 +14362,7 @@ order moments are:
            \overline{xy}(\cos^2\theta-\sin^2\theta)}
 
 @noindent
-The best @mymath{\theta} (@mymath{\theta_0}, where major axis lies
-along the @mymath{x_\theta} axis) can be found by:
+The best @mymath{\theta} (@mymath{\theta_0}, where the major axis lies along the @mymath{x_\theta} axis) can be found by:
 
 @dispmath{\left.{\partial \overline{x_\theta^2} \over \partial 
\theta}\right|_{\theta_0}=0}
 Taking the derivative, we get:
@@ -19699,18 +14374,10 @@ Taking the derivative, we get:
 
 @cindex Position angle
 @noindent
-MakeCatalog uses the standard C math library's @code{atan2} function
-to estimate @mymath{\theta_0}, which we define as the position angle
-of the ellipse. To recall, this is the angle of the major axis of the
-ellipse with the @mymath{x} axis. By definition, when the elliptical
-profile is rotated by @mymath{\theta_0}, then
-@mymath{\overline{xy_{\theta_0}}=0},
-@mymath{\overline{x_{\theta_0}^2}} will be the extent of the maximum
-variance and @mymath{\overline{y_{\theta_0}^2}} the extent of the
-minimum variance (which are perpendicular for an ellipse). Replacing
-@mymath{\theta_0} in the equations above for
-@mymath{\overline{x_\theta}} and @mymath{\overline{y_\theta}}, we can
-get the semi-major (@mymath{A}) and semi-minor (@mymath{B}) lengths:
+MakeCatalog uses the standard C math library's @code{atan2} function to 
estimate @mymath{\theta_0}, which we define as the position angle of the 
ellipse.
+To recall, this is the angle of the major axis of the ellipse with the 
@mymath{x} axis.
+By definition, when the elliptical profile is rotated by @mymath{\theta_0}, 
then @mymath{\overline{xy_{\theta_0}}=0}, @mymath{\overline{x_{\theta_0}^2}} 
will be the extent of the maximum variance and 
@mymath{\overline{y_{\theta_0}^2}} the extent of the minimum variance (which 
are perpendicular for an ellipse).
+Replacing @mymath{\theta_0} in the equations above for 
@mymath{\overline{x_\theta}} and @mymath{\overline{y_\theta}}, we can get the 
semi-major (@mymath{A}) and semi-minor (@mymath{B}) lengths:
 
 @dispmath{A^2\equiv\overline{x_{\theta_0}^2}= {\overline{x^2} +
 \overline{y^2} \over 2} + \sqrt{\left({\overline{x^2}-\overline{y^2} \over 
2}\right)^2 + \overline{xy}^2}}
@@ -19718,14 +14385,8 @@ get the semi-major (@mymath{A}) and semi-minor 
(@mymath{B}) lengths:
 @dispmath{B^2\equiv\overline{y_{\theta_0}^2}= {\overline{x^2} +
 \overline{y^2} \over 2} - \sqrt{\left({\overline{x^2}-\overline{y^2} \over 
2}\right)^2 + \overline{xy}^2}}
 
-As a summary, it is important to remember that the units of @mymath{A} and
-@mymath{B} are in pixels (the standard deviation of a positional
-distribution) and that they represent the spatial light distribution of the
-object in both image dimensions (rotated by @mymath{\theta_0}). When the
-object cannot be represented as an ellipse, this interpretation breaks
-down: @mymath{\overline{xy_{\theta_0}}\neq0} and
-@mymath{\overline{y_{\theta_0}^2}} will not be the direction of minimum
-variance.
+In summary, it is important to remember that @mymath{A} and @mymath{B} are in units of pixels (the standard deviation of a positional distribution) and that they represent the spatial light distribution of the object in both image dimensions (rotated by @mymath{\theta_0}).
+When the object cannot be represented as an ellipse, this interpretation 
breaks down: @mymath{\overline{xy_{\theta_0}}\neq0} and 
@mymath{\overline{y_{\theta_0}^2}} will not be the direction of minimum 
variance.
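+
+As a numerical sketch of these relations (the moment values are hypothetical; this is not Gnuastro's implementation), the position angle and semi-axis lengths can be computed from the three second moments like this:
+
+@example
+#include <stdio.h>
+#include <math.h>
+
+int
+main(void)
+@{
+  /* Hypothetical second moments (pixel units squared).  */
+  double x2=9.0, y2=4.0, xy=2.0;
+
+  /* Position angle from the derivative condition above:
+     tan(2*theta0) = 2*xy / (x2-y2).                     */
+  double theta0 = 0.5 * atan2(2*xy, x2-y2);
+
+  /* Semi-major and semi-minor axis lengths (pixels).    */
+  double t = sqrt( (x2-y2)*(x2-y2)/4 + xy*xy );
+  double A = sqrt( (x2+y2)/2 + t );
+  double B = sqrt( (x2+y2)/2 - t );
+
+  printf("theta0=%.2f deg, A=%.2f, B=%.2f\n",
+         theta0 * 180/3.14159265358979, A, B);
+  return 0;
+@}
+@end example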
 
 
 
@@ -19734,91 +14395,58 @@ variance.
 @node Adding new columns to MakeCatalog, Invoking astmkcatalog, Measuring 
elliptical parameters, MakeCatalog
 @subsection Adding new columns to MakeCatalog
 
-MakeCatalog is designed to allow easy addition of different measurements
-over a labeled image (see @url{https://arxiv.org/abs/1611.06387v1, Akhlaghi
-[2016]}). A check-list style description of necessary steps to do that is
-described in this section. The common development characteristics of
-MakeCatalog and other Gnuastro programs is explained in
-@ref{Developing}. We strongly encourage you to have a look at that chapter
-to greatly simplify your navigation in the code. After adding and testing
-your column, you are most welcome (and encouraged) to share it with us so
-we can add to the next release of Gnuastro for everyone else to also
-benefit from your efforts.
-
-MakeCatalog will first pass over each label's pixels two times and do
-necessary raw/internal calculations. Once the passes are done, it will use
-the raw information for filling the final catalog's columns. In the first
-pass it will gather mainly object information and in the second run, it
-will mainly focus on the clumps, or any other measurement that needs an
-output from the first pass. These two passes are designed to be raw
-summations: no extra processing. This will allow parallel processing and
-simplicity/clarity. So if your new calculation, needs new raw information
-from the pixels, then you will need to also modify the respective
-@code{mkcatalog_first_pass} and @code{mkcatalog_second_pass} functions
-(both in @file{bin/mkcatalog/mkcatalog.c}) and define new raw table columns
-in @file{main.h} (hopefully the comments in the code are clear enough).
-
-In all these different places, the final columns are sorted in the same
-order (same order as @ref{Invoking astmkcatalog}). This allows a particular
-column/option to be easily found in all steps. Therefore in adding your new
-option, be sure to keep it in the same relative place in the list in all
-the separate places (it doesn't necessarily have to be in the end), and
-near conceptually similar options.
+MakeCatalog is designed to allow easy addition of different measurements over 
a labeled image (see @url{https://arxiv.org/abs/1611.06387v1, Akhlaghi [2016]}).
+The necessary steps to do that are described in this section in a check-list style.
+The common development characteristics of MakeCatalog and other Gnuastro programs are explained in @ref{Developing}.
+We strongly encourage you to have a look at that chapter to greatly simplify your navigation in the code.
+After adding and testing your column, you are most welcome (and encouraged) to share it with us so we can add it to the next release of Gnuastro for everyone else to also benefit from your efforts.
+
+MakeCatalog will first pass over each label's pixels two times and do 
necessary raw/internal calculations.
+Once the passes are done, it will use the raw information for filling the 
final catalog's columns.
+In the first pass it will gather mainly object information and in the second pass it will mainly focus on the clumps, or any other measurement that needs an output from the first pass.
+These two passes are designed to be raw summations: no extra processing.
+This will allow parallel processing and simplicity/clarity.
+So if your new calculation needs new raw information from the pixels, then you will need to also modify the respective @code{mkcatalog_first_pass} and @code{mkcatalog_second_pass} functions (both in @file{bin/mkcatalog/mkcatalog.c}) and define new raw table columns in @file{main.h} (hopefully the comments in the code are clear enough).
+
+In all these different places, the final columns are sorted in the same order 
(same order as @ref{Invoking astmkcatalog}).
+This allows a particular column/option to be easily found in all steps.
+Therefore, in adding your new option, be sure to keep it in the same relative place in the list in all the separate places (it doesn't necessarily have to be at the end), and near conceptually similar options.
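+
+For example, the first step in the checklist below (in @file{main.h}) might schematically look like this (a sketch only: the existing entry names here are illustrative and @code{OCOL_NEWMEASURE} is a hypothetical addition; see the actual @file{bin/mkcatalog/main.h} for the real list):
+
+@example
+/* Schematic sketch of the raw object columns in
+   bin/mkcatalog/main.h; names here are illustrative. */
+enum objectcols
+@{
+  OCOL_NUMALL,        /* Number of all pixels in the label. */
+  OCOL_SUM,           /* Sum of (Sky subtracted) values.    */
+  OCOL_NEWMEASURE,    /* <-- Your new raw measurement.      */
+
+  OCOL_NUMCOLS,       /* Total number of raw columns.       */
+@};
+@end example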
 
 @table @file
 
 @item main.h
-The @code{objectcols} and @code{clumpcols} enumerated variables
-(@code{enum}) define the raw/internal calculation columns. If your new
-column requires new raw calculations, add a row to the respective list. If
-your calculation requires any other settings parameters, you should add a
-variable to the @code{mkcatalogparams} structure.
+The @code{objectcols} and @code{clumpcols} enumerated variables (@code{enum}) 
define the raw/internal calculation columns.
+If your new column requires new raw calculations, add a row to the respective 
list.
+If your calculation requires any other setting parameters, you should add a variable to the @code{mkcatalogparams} structure.
 
 @item ui.c
-If the new column needs raw calculations (an entry was added in
-@code{objectcols} and @code{clumpcols}), specify which inputs it needs in
-@code{ui_necessary_inputs}, similar to the other options. Afterwards, if
-your column includes any particular settings (you needed to add a variable
-to the @code{mkcatalogparams} structure in @file{main.h}), you should do
-the sanity checks and preparations for it here.
+If the new column needs raw calculations (an entry was added in 
@code{objectcols} and @code{clumpcols}), specify which inputs it needs in 
@code{ui_necessary_inputs}, similar to the other options.
+Afterwards, if your column includes any particular settings (you needed to add 
a variable to the @code{mkcatalogparams} structure in @file{main.h}), you 
should do the sanity checks and preparations for it here.
 
 @item ui.h
-The @code{option_keys_enum} associates a unique value for each option to
-MakeCatalog. The options that have a short option version, the single
-character short comment is used for the value. Those that don't have a
-short option version, get a large integer automatically. You should add a
-variable here to identify your desired column.
+The @code{option_keys_enum} associates a unique value with each option to MakeCatalog.
+For options that have a short option version, the single-character short option is used for the value.
+Those that don't have a short option version get a large integer automatically.
+You should add a variable here to identify your desired column.
 
 
 @cindex GNU C library
 @item args.h
-This file specifies all the parameters for the GNU C library, Argp
-structure that is in charge of reading the user's options. To define your
-new column, just copy an existing set of parameters and change the first,
-second and 5th values (the only ones that differ between all the columns),
-you should use the macro you defined in @file{ui.h} here.
+This file specifies all the parameters for the GNU C library's Argp structure that is in charge of reading the user's options.
+To define your new column, just copy an existing set of parameters and change the first, second and fifth values (the only ones that differ between the columns); for these, use the macro you defined in @file{ui.h}.
 
 
 @item columns.c
-This file contains the main definition and high-level calculation of your
-new column through the @code{columns_define_alloc} and @code{columns_fill}
-functions. In the first, you specify the basic information about the
-column: its name, units, comments, type (see @ref{Numeric data types}) and
-how it should be printed if the output is a text file. You should also
-specify the raw/internal columns that are necessary for this column here as
-the many existing examples show. Through the types for objects and rows,
-you can specify if this column is only for clumps, objects or both.
-
-The second main function (@code{columns_fill}) writes the final value into
-the appropriate column for each object and clump. As you can see in the
-many existing examples, you can define your processing on the raw/internal
-calculations here and save them in the output.
+This file contains the main definition and high-level calculation of your new 
column through the @code{columns_define_alloc} and @code{columns_fill} 
functions.
+In the first, you specify the basic information about the column: its name, 
units, comments, type (see @ref{Numeric data types}) and how it should be 
printed if the output is a text file.
+You should also specify the raw/internal columns that are necessary for this 
column here as the many existing examples show.
+Through the types for objects and rows, you can specify if this column is only 
for clumps, objects or both.
+
+The second main function (@code{columns_fill}) writes the final value into the 
appropriate column for each object and clump.
+As you can see in the many existing examples, you can define your processing 
on the raw/internal calculations here and save them in the output.
 
 @item mkcatalog.c
-As described before, this file contains the two main MakeCatalog
-work-horses: @code{mkcatalog_first_pass} and @code{mkcatalog_second_pass},
-their names are descriptive enough and their internals are also clear and
-heavily commented.
+As described before, this file contains the two main MakeCatalog work-horses: @code{mkcatalog_first_pass} and @code{mkcatalog_second_pass}; their names are descriptive enough and their internals are also clear and heavily commented.
 
 @item doc/gnuastro.texi
 Update this manual and add a description for the new column.
@@ -19832,9 +14460,8 @@ Update this manual and add a description for the new 
column.
 @node Invoking astmkcatalog,  , Adding new columns to MakeCatalog, MakeCatalog
 @subsection Invoking MakeCatalog
 
-MakeCatalog will do measurements and produce a catalog from a labeled
-dataset and optional values dataset(s). The executable name is
-@file{astmkcatalog} with the following general template
+MakeCatalog will do measurements and produce a catalog from a labeled dataset and optional values dataset(s).
+The executable name is @file{astmkcatalog} with the following general template:
 
 @example
 $ astmkcatalog [OPTION ...] InputImage.fits
@@ -19866,23 +14493,16 @@ $ astmkcatalog K_segmented.fits --hdu=DETECTIONS 
--clumpscat     \
 
 @cindex Gaussian
 @noindent
-If MakeCatalog is to do processing (not printing help or option values), an
-input labeled image should be provided. The options described in this
-section are those that are particular to MakeProfiles. For operations that
-MakeProfiles shares with other programs (mainly involving input/output or
-general processing steps), see @ref{Common options}. Also see @ref{Common
-program behavior} for some general characteristics of all Gnuastro programs
-including MakeCatalog.
-
-The various measurements/columns of MakeCatalog are requested as options,
-either on the command-line or in configuration files, see
-@ref{Configuration files}. The full list of available columns is available
-in @ref{MakeCatalog measurements}. Depending on the requested columns,
-MakeCatalog needs more than one input dataset, for more details, please see
-@ref{MakeCatalog inputs and basic settings}. The upper-limit measurements
-in particular need several configuration options which are thoroughly
-discussed in @ref{Upper-limit settings}. Finally, in @ref{MakeCatalog
-output} the output file(s) created by MakeCatalog are discussed.
+If MakeCatalog is to do processing (not printing help or option values), an 
input labeled image should be provided.
+The options described in this section are those that are particular to MakeCatalog.
+For operations that MakeCatalog shares with other programs (mainly involving input/output or general processing steps), see @ref{Common options}.
+Also see @ref{Common program behavior} for some general characteristics of all 
Gnuastro programs including MakeCatalog.
+
+The various measurements/columns of MakeCatalog are requested as options, either on the command-line or in configuration files (see @ref{Configuration files}).
+The full list of available columns is given in @ref{MakeCatalog measurements}.
+Depending on the requested columns, MakeCatalog may need more than one input dataset; for more details, please see @ref{MakeCatalog inputs and basic settings}.
+The upper-limit measurements in particular need several configuration options 
which are thoroughly discussed in @ref{Upper-limit settings}.
+Finally, in @ref{MakeCatalog output} the output file(s) created by MakeCatalog 
are discussed.
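+
+For example, a run requesting a few common measurement columns on a Segment output might look like the following (an illustrative command with a hypothetical file name; see @ref{MakeCatalog measurements} for the available column options):
+
+@example
+$ astmkcatalog seg.fits --ids --ra --dec --magnitude --sn \
+               --zeropoint=25.0
+@end example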
 
 @menu
 * MakeCatalog inputs and basic settings::  Input files and basic settings.
@@ -19894,149 +14514,93 @@ output} the output file(s) created by MakeCatalog 
are discussed.
 @node MakeCatalog inputs and basic settings, Upper-limit settings, Invoking 
astmkcatalog, Invoking astmkcatalog
 @subsubsection MakeCatalog inputs and basic settings
 
-MakeCatalog works by using a localized/labeled dataset (see
-@ref{MakeCatalog}). This dataset maps/labels pixels to a specific target
-(row number in the final catalog) and is thus the only necessary input
-dataset to produce a minimal catalog in any situation. Because it only has
-labels/counters, it must have an integer type (see @ref{Numeric data
-types}), see below if your labels are in a floating point container. When
-the requested measurements only need this dataset (for example
-@option{--geox}, @option{--geoy}, or @option{--geoarea}), MakeCatalog won't
-read any more datasets.
-
-Low-level measurements that only use the labeled image are rarely
-sufficient for any high-level science case. Therefore necessary input
-datasets depend on the requested columns in each run. For example, let's
-assume you want the brightness/magnitude and signal-to-noise ratio of your
-labeled regions. For these columns, you will also need to provide an extra
-dataset containing values for every pixel of the labeled input (to measure
-brightness) and another for the Sky standard deviation (to measure
-error). All such auxiliary input files have to have the same size (number
-of pixels in each dimension) as the input labeled image. Their numeric data
-type is irrelevant (they will be converted to 32-bit floating point
-internally). For the full list of available measurements, see
-@ref{MakeCatalog measurements}.
-
-The ``values'' dataset is used for measurements like brightness/magnitude,
-or flux-weighted positions. If it is a real image, by default it is assumed
-to be already Sky-subtracted prior to running MakeCatalog. If it isn't, you
-use the @option{--subtractsky} option to, so MakeCatalog reads and
-subtracts the Sky dataset before any processing. To obtain the Sky value,
-you can use the @option{--sky} option of @ref{Statistics}, but the best
-recommended method is @ref{NoiseChisel}, see @ref{Sky value}.
-
-MakeCatalog can also do measurements on sub-structures of detections. In
-other words, it can produce two catalogs. Following the nomenclature of
-Segment (see @ref{Segment}), the main labeled input dataset is known as
-``object'' labels and the (optional) sub-structure input dataset is known
-as ``clumps''. If MakeCatalog is run with the @option{--clumpscat} option,
-it will also need a labeled image containing clumps, similar to what
-Segment produces (see @ref{Segment output}). Since clumps are defined
-within detected regions (they exist over signal, not noise), MakeCatalog
-uses their boundaries to subtract the level of signal under them.
-
-There are separate options to explicitly request a file name and
-HDU/extension for each of the required input datasets as fully described
-below (with the @option{--*file} format). When each dataset is in a
-separate file, these options are necessary. However, one great advantage of
-the FITS file format (that is heavily used in astronomy) is that it allows
-the storage of multiple datasets in one file. So in most situations (for
-example if you are using the outputs of @ref{NoiseChisel} or
-@ref{Segment}), all the necessary input datasets can be in one file.
-
-When none of the @option{--*file} options are given, MakeCatalog will
-assume the necessary input datasets are in the file given as its argument
-(without any option). When the Sky or Sky standard deviation datasets are
-necessary and the only @option{--*file} option called is
-@option{--valuesfile}, MakeCatalog will search for these datasets (with the
-default/given HDUs) in the file given to @option{--valuesfile} (before
-looking into the the main argument file).
-
-When the clumps image (necessary with the @option{--clumpscat} option) is
-used, MakeCatalog looks into the (possibly existing) @code{NUMLABS} keyword
-for the total number of clumps in the image (irrespective of how many
-objects there are). If its not present, it will count them and possibly
-re-label the clumps so the clump labels always start with 1 and finish with
-the total number of clumps in each object. The re-labeled clumps image will
-be stored with the @file{-clumps-relab.fits} suffix. This can slightly
-slow-down the run.
-
-Note that @code{NUMLABS} is automatically written by Segment in its
-outputs, so if you are feeding Segment's clump labels, you can benefit from
-the improved speed. Otherwise, if you are creating the clumps label dataset
-manually, it may be good to include the @code{NUMLABS} keyword in its
-header and also be sure that there is no gap in the clump labels. For
-example if an object has three clumps, they are labeled as 1, 2, 3. If they
-are labeled as 1, 3, 4, or any other combination of three positive integers
-that aren't an increment of the previous, you might get unknown behavior.
-
-It may happen that your labeled objects image was created with a program
-that only outputs floating point files. However, you know it only has
-integer valued pixels that are stored in a floating point container. In
-such cases, you can use Gnuastro's Arithmetic program (see
-@ref{Arithmetic}) to change the numerical data type of the image
-(@file{float.fits}) to an integer type image (@file{int.fits}) with a
-command like below:
+MakeCatalog works by using a localized/labeled dataset (see @ref{MakeCatalog}).
+This dataset maps/labels pixels to a specific target (row number in the final 
catalog) and is thus the only necessary input dataset to produce a minimal 
catalog in any situation.
+Because it only has labels/counters, it must have an integer type (see @ref{Numeric data types}); see below if your labels are stored in a floating point container.
+When the requested measurements only need this dataset (for example 
@option{--geox}, @option{--geoy}, or @option{--geoarea}), MakeCatalog won't 
read any more datasets.
+
+Low-level measurements that only use the labeled image are rarely sufficient 
for any high-level science case.
+Therefore, the necessary input datasets depend on the requested columns in each run.
+For example, let's assume you want the brightness/magnitude and 
signal-to-noise ratio of your labeled regions.
+For these columns, you will also need to provide an extra dataset containing 
values for every pixel of the labeled input (to measure brightness) and another 
for the Sky standard deviation (to measure error).
+All such auxiliary input files have to have the same size (number of pixels in 
each dimension) as the input labeled image.
+Their numeric data type is irrelevant (they will be converted to 32-bit 
floating point internally).
+For the full list of available measurements, see @ref{MakeCatalog 
measurements}.
+
+The ``values'' dataset is used for measurements like brightness/magnitude, or 
flux-weighted positions.
+If it is a real image, by default it is assumed to be already Sky-subtracted 
prior to running MakeCatalog.
+If it isn't, use the @option{--subtractsky} option so that MakeCatalog reads and subtracts the Sky dataset before any processing.
+To obtain the Sky value, you can use the @option{--sky} option of @ref{Statistics}, but the recommended method is @ref{NoiseChisel}, see @ref{Sky value}.
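+
+For example, a run on a values image that isn't already Sky subtracted might look like this (an illustrative command with hypothetical file names, using the options described in this section):
+
+@example
+$ astmkcatalog seg.fits --valuesfile=image.fits --subtractsky \
+               --insky=sky.fits --ids --magnitude --zeropoint=25.0
+@end example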
+
+MakeCatalog can also do measurements on sub-structures of detections.
+In other words, it can produce two catalogs.
+Following the nomenclature of Segment (see @ref{Segment}), the main labeled 
input dataset is known as ``object'' labels and the (optional) sub-structure 
input dataset is known as ``clumps''.
+If MakeCatalog is run with the @option{--clumpscat} option, it will also need 
a labeled image containing clumps, similar to what Segment produces (see 
@ref{Segment output}).
+Since clumps are defined within detected regions (they exist over signal, not 
noise), MakeCatalog uses their boundaries to subtract the level of signal under 
them.
+
+There are separate options to explicitly request a file name and HDU/extension 
for each of the required input datasets as fully described below (with the 
@option{--*file} format).
+When each dataset is in a separate file, these options are necessary.
+However, one great advantage of the FITS file format (that is heavily used in 
astronomy) is that it allows the storage of multiple datasets in one file.
+So in most situations (for example if you are using the outputs of 
@ref{NoiseChisel} or @ref{Segment}), all the necessary input datasets can be in 
one file.
+
+When none of the @option{--*file} options are given, MakeCatalog will assume 
the necessary input datasets are in the file given as its argument (without any 
option).
+When the Sky or Sky standard deviation datasets are necessary and the only 
@option{--*file} option called is @option{--valuesfile}, MakeCatalog will 
search for these datasets (with the default/given HDUs) in the file given to 
@option{--valuesfile} (before looking into the main argument file).
+
+When the clumps image (necessary with the @option{--clumpscat} option) is 
used, MakeCatalog looks into the (possibly existing) @code{NUMLABS} keyword for 
the total number of clumps in the image (irrespective of how many objects there 
are).
+If it's not present, MakeCatalog will count them and possibly re-label the clumps so that the clump labels always start with 1 and finish with the total number of clumps in each object.
+The re-labeled clumps image will be stored with the @file{-clumps-relab.fits} 
suffix.
+This can slightly slow down the run.
+
+Note that @code{NUMLABS} is automatically written by Segment in its outputs, 
so if you are feeding Segment's clump labels, you can benefit from the improved 
speed.
+Otherwise, if you are creating the clumps label dataset manually, it may be 
good to include the @code{NUMLABS} keyword in its header and also be sure that 
there is no gap in the clump labels.
+For example if an object has three clumps, they are labeled as 1, 2, 3.
+If they are labeled as 1, 3, 4, or any other combination of three positive integers where each isn't one more than the previous, you might get unknown behavior.
+
+It may happen that your labeled objects image was created with a program that 
only outputs floating point files.
+However, you know it only has integer valued pixels that are stored in a 
floating point container.
+In such cases, you can use Gnuastro's Arithmetic program (see 
@ref{Arithmetic}) to change the numerical data type of the image 
(@file{float.fits}) to an integer type image (@file{int.fits}) with a command 
like below:
 
 @example
 @command{$ astarithmetic float.fits int32 --output=int.fits}
 @end example
 
-To summarize: if the input file to MakeCatalog is the default/full output
-of Segment (see @ref{Segment output}) you don't have to worry about any of
-the @option{--*file} options below. You can just give Segment's output file
-to MakeCatalog as described in @ref{Invoking astmkcatalog}. To feed
-NoiseChisel's output into MakeCatalog, just change the labeled dataset's
-header (with @option{--hdu=DETECTIONS}). The full list of input dataset
-options and general setting options are described below.
+To summarize: if the input file to MakeCatalog is the default/full output of 
Segment (see @ref{Segment output}) you don't have to worry about any of the 
@option{--*file} options below.
+You can just give Segment's output file to MakeCatalog as described in 
@ref{Invoking astmkcatalog}.
+To feed NoiseChisel's output into MakeCatalog, just change the labeled 
dataset's header (with @option{--hdu=DETECTIONS}).
+The full list of input dataset options and general setting options is described below.
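+
+For example (illustrative commands with hypothetical file names):
+
+@example
+## Catalog from Segment's default output:
+$ astmkcatalog seg-output.fits --ids --magnitude
+
+## Catalog from NoiseChisel's output (detection labels):
+$ astmkcatalog nc-output.fits --hdu=DETECTIONS --ids --magnitude
+@end example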
 
 @table @option
 
 @item -l STR
 @itemx --clumpsfile=STR
-The file containing the labeled clumps dataset when @option{--clumpscat} is
-called (see @ref{MakeCatalog output}). When @option{--clumpscat} is called,
-but this option isn't, MakeCatalog will look into the main input file
-(given as an argument) for the required extension/HDU (value to
-@option{--clumpshdu}).
+The file containing the labeled clumps dataset when @option{--clumpscat} is 
called (see @ref{MakeCatalog output}).
+When @option{--clumpscat} is called, but this option isn't, MakeCatalog will 
look into the main input file (given as an argument) for the required 
extension/HDU (value to @option{--clumpshdu}).
 
 @item --clumpshdu=STR
-The HDU/extension of the clump labels dataset. Only pixels with values
-above zero will be considered. The clump labels dataset has to be an
-integer data type (see @ref{Numeric data types}) and only pixels with a
-value larger than zero will be used. See @ref{Segment output} for a
-description of the expected format.
+The HDU/extension of the clump labels dataset.
+The clump labels dataset has to be an integer data type (see @ref{Numeric data types}) and only pixels with a value larger than zero will be used.
+See @ref{Segment output} for a description of the expected format.
 
 @item -v STR
 @itemx --valuesfile=STR
-The file name of the (sky-subtracted) values dataset. When any of the
-columns need values to associate with the input labels (for example to
-measure the brightness/magnitude of a galaxy), MakeCatalog will look into a
-``values'' for the respective pixel values. In most common processing, this
-is the actual astronomical image that the labels were defined, or detected,
-over. The HDU/extension of this dataset in the given file can be specified
-with @option{--valueshdu}. If this option is not called, MakeCatalog will
-look for the given extension in the main input file.
+The file name of the (sky-subtracted) values dataset.
+When any of the columns need values to associate with the input labels (for example to measure the brightness/magnitude of a galaxy), MakeCatalog will look into a ``values'' dataset for the respective pixel values.
+In most common processing, this is the actual astronomical image that the 
labels were defined, or detected, over.
+The HDU/extension of this dataset in the given file can be specified with 
@option{--valueshdu}.
+If this option is not called, MakeCatalog will look for the given extension in 
the main input file.
 
 @item --valueshdu=STR/INT
-The name or number (counting from zero) of the extension containing the
-``values'' dataset, see the descriptions above and those in
-@option{--valuesfile} for more.
+The name or number (counting from zero) of the extension containing the 
``values'' dataset, see the descriptions above and those in 
@option{--valuesfile} for more.
 
 @item -s STR/FLT
 @itemx --insky=STR/FLT
-Sky value as a single number, or the file name containing a dataset
-(different values per pixel or tile). The Sky dataset is only necessary
-when @option{--subtractsky} is called or when a column directly related to
-the Sky value is requested (currently @option{--sky}). This dataset may be
-a tessellation, with one element per tile (see @option{--oneelempertile} of
-NoiseChisel's @ref{Processing options}).
-
-When the Sky dataset is necessary but this option is not called,
-MakeCatalog will assume it is an HDU/extension (specified by
-@option{--skyhdu}) in one of the already given files. First it will look
-for it in the @option{--valuesfile} (if it is given) and then the main
-input file (given as an argument).
+Sky value as a single number, or the file name containing a dataset (different 
values per pixel or tile).
+The Sky dataset is only necessary when @option{--subtractsky} is called or 
when a column directly related to the Sky value is requested (currently 
@option{--sky}).
+This dataset may be a tessellation, with one element per tile (see 
@option{--oneelempertile} of NoiseChisel's @ref{Processing options}).
+
+When the Sky dataset is necessary but this option is not called, MakeCatalog 
will assume it is an HDU/extension (specified by @option{--skyhdu}) in one of 
the already given files.
+First it will look for it in the @option{--valuesfile} (if it is given) and 
then the main input file (given as an argument).
 
 By default the values dataset is assumed to be already Sky subtracted, so
 this dataset is not necessary for many of the columns.
@@ -20050,32 +14614,23 @@ processing.
 
 @item -t STR/FLT
 @itemx --instd=STR/FLT
-Sky standard deviation value as a single number, or the file name
-containing a dataset (different values per pixel or tile). With the
-@option{--variance} option you can tell MakeCatalog to interpret this
-value/dataset as a variance image, not standard deviation.
-
-@strong{Important note:} This must only be the SKY standard deviation or
-variance (not including the signal's contribution to the error). In other
-words, the final standard deviation of a pixel depends on how much signal
-there is in it. MakeCatalog will find the amount of signal within each
-pixel (while subtracting the Sky, if @option{--subtractsky} is called) and
-account for the extra error due to it's value (signal). Therefore if the
-input standard deviation (or variance) image also contains the contribution
-of signal to the error, then the final error measurements will be
-over-estimated.
+Sky standard deviation value as a single number, or the file name containing a 
dataset (different values per pixel or tile).
+With the @option{--variance} option you can tell MakeCatalog to interpret this 
value/dataset as a variance image, not standard deviation.
+
+@strong{Important note:} This must only be the SKY standard deviation or 
variance (not including the signal's contribution to the error).
+In other words, the final standard deviation of a pixel depends on how much 
signal there is in it.
+MakeCatalog will find the amount of signal within each pixel (while subtracting the Sky, if @option{--subtractsky} is called) and account for the extra error due to its value (signal).
+Therefore if the input standard deviation (or variance) image also contains 
the contribution of signal to the error, then the final error measurements will 
be over-estimated.
 
 @item --stdhdu=STR
 The HDU of the Sky value standard deviation image.
 
 @item --variance
-The dataset given to @option{--stdfile} (and @option{--stdhdu} has the Sky
-variance of every pixel, not the Sky standard deviation.
+The dataset given to @option{--instd} (and @option{--stdhdu}) has the Sky variance of every pixel, not the Sky standard deviation.
 
 @item -z FLT
 @itemx --zeropoint=FLT
-The zero point magnitude for the input image, see @ref{Flux Brightness and
-magnitude}.
+The zero point magnitude for the input image, see @ref{Flux Brightness and 
magnitude}.
 
 @end table
 
@@ -20083,139 +14638,86 @@ magnitude}.
 @node Upper-limit settings, MakeCatalog measurements, MakeCatalog inputs and 
basic settings, Invoking astmkcatalog
 @subsubsection Upper-limit settings
 
-The upper-limit magnitude was discussed in @ref{Quantifying measurement
-limits}. Unlike other measured values/columns in MakeCatalog, the upper
-limit magnitude needs several extra parameters which are discussed
-here. All the options specific to the upper-limit measurements start with
-@option{up} for ``upper-limit''. The only exception is @option{--envseed}
-that is also present in other programs and is general for any job requiring
-random number generation in Gnuastro (see @ref{Generating random numbers}).
+The upper-limit magnitude was discussed in @ref{Quantifying measurement 
limits}.
+Unlike other measured values/columns in MakeCatalog, the upper limit magnitude 
needs several extra parameters which are discussed here.
+All the options specific to the upper-limit measurements start with 
@option{up} for ``upper-limit''.
+The only exception is @option{--envseed} that is also present in other 
programs and is general for any job requiring random number generation in 
Gnuastro (see @ref{Generating random numbers}).
 
 @cindex Reproducibility
-One very important consideration in Gnuastro is reproducibility. Therefore,
-the values to all of these parameters along with others (like the random
-number generator type and seed) are also reported in the comments of the
-final catalog when the upper limit magnitude column is desired. The random
-seed that is used to define the random positions for each object or clump
-is unique and set based on the (optionally) given seed, the total number of
-objects and clumps and also the labels of the clumps and objects. So with
-identical inputs, an identical upper-limit magnitude will be
-found. However, even if the seed is identical, when the ordering of the
-object/clump labels differs between different runs, the result of
-upper-limit measurements will not be identical.
-
-MakeCatalog will randomly place the object/clump footprint over the
-dataset. When the randomly placed footprint doesn't fall on any object or
-masked region (see @option{--upmaskfile}) it will be used in the final
-distribution. Otherwise that particular random position will be ignored and
-another random position will be generated. Finally, when the distribution
-has the desired number of successfully measured random samples
-(@option{--upnum}) the distribution's properties will be measured and
-placed in the catalog.
-
-When the profile is very large or the image is significantly covered by
-detections, it might not be possible to find the desired number of
-samplings in a reasonable time. MakeProfiles will continue searching until
-it is unable to find a successful position (since the last successful
-measurement@footnote{The counting of failed positions restarts on every
-successful measurement.}), for a large multiple of @option{--upnum}
-(currently@footnote{In Gnuastro's source, this constant number is defined
-as the @code{MKCATALOG_UPPERLIMIT_MAXFAILS_MULTIP} macro in
-@file{bin/mkcatalog/main.h}, see @ref{Downloading the source}.} this is
-10). If @option{--upnum} successful samples cannot be found until this
-limit is reached, MakeCatalog will set the upper-limit magnitude for that
-object to NaN (blank).
-
-MakeCatalog will also print a warning if the range of positions available
-for the labeled region is smaller than double the size of the region. In
-such cases, the limited range of random positions can artificially decrease
-the standard deviation of the final distribution. If your dataset can allow
-it (it is large enough), it is recommended to use a larger range if you see
-such warnings.
+One very important consideration in Gnuastro is reproducibility.
+Therefore, the values of all of these parameters, along with others (like the random number generator type and seed), are also reported in the comments of the final catalog when the upper limit magnitude column is desired.
+The random seed that is used to define the random positions for each object or 
clump is unique and set based on the (optionally) given seed, the total number 
of objects and clumps and also the labels of the clumps and objects.
+So with identical inputs, an identical upper-limit magnitude will be found.
+However, even if the seed is identical, when the ordering of the object/clump 
labels differs between different runs, the result of upper-limit measurements 
will not be identical.
+
+MakeCatalog will randomly place the object/clump footprint over the dataset.
+When the randomly placed footprint doesn't fall on any object or masked region 
(see @option{--upmaskfile}), it will be used in the final distribution.
+Otherwise that particular random position will be ignored and another random 
position will be generated.
+Finally, when the distribution has the desired number of successfully measured 
random samples (@option{--upnum}), the distribution's properties will be 
measured and placed in the catalog.
+
+When the profile is very large or the image is significantly covered by 
detections, it might not be possible to find the desired number of samplings in 
a reasonable time.
+MakeCatalog will continue searching until the number of failed positions since 
the last successful measurement@footnote{The counting of failed positions 
restarts on every successful measurement.} reaches a large multiple of 
@option{--upnum} (currently@footnote{In Gnuastro's source, this constant number 
is defined as the @code{MKCATALOG_UPPERLIMIT_MAXFAILS_MULTIP} macro in 
@file{bin/mkcatalog/main.h}, see @ref{Downloading the source}.} this is 10).
+If @option{--upnum} successful samples cannot be found until this limit is 
reached, MakeCatalog will set the upper-limit magnitude for that object to NaN 
(blank).
+
+MakeCatalog will also print a warning if the range of positions available for 
the labeled region is smaller than double the size of the region.
+In such cases, the limited range of random positions can artificially decrease 
the standard deviation of the final distribution.
+If your dataset is large enough to allow it, it is recommended to use a larger 
range when you see such warnings.
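+
+As a minimal sketch (the input name and option values below are purely 
illustrative), a call requesting upper-limit measurements could look like:
+
+@example
+$ astmkcatalog seg.fits --ids --magnitude --upperlimitmag \
+               --upnum=1000 --upnsigma=3
+@end example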
 
 @table @option
 
 @item --upmaskfile=STR
-File name of mask image to use for upper-limit calculation. In some cases
-(especially when doing matched photometry), the object labels specified in
-the main input and mask image might not be adequate. In other words they do
-not necessarily have to cover @emph{all} detected objects: the user might
-have selected only a few of the objects in their labeled image. This option
-can be used to ignore regions in the image in these situations when
-estimating the upper-limit magnitude. All the non-zero pixels of the image
-specified by this option (in the @option{--upmaskhdu} extension) will be
-ignored in the upper-limit magnitude measurements.
-
-For example, when you are using labels from another image, you can give
-NoiseChisel's objects image output for this image as the value to this
-option. In this way, you can be sure that regions with data do not harm
-your distribution. See @ref{Quantifying measurement limits} for more on the
-upper limit magnitude.
+File name of mask image to use for upper-limit calculation.
+In some cases (especially when doing matched photometry), the object labels 
specified in the main input and mask image might not be adequate.
+In other words, they do not necessarily have to cover @emph{all} detected 
objects: the user might have selected only a few of the objects in their 
labeled image.
+This option can be used to ignore regions in the image in these situations 
when estimating the upper-limit magnitude.
+All the non-zero pixels of the image specified by this option (in the 
@option{--upmaskhdu} extension) will be ignored in the upper-limit magnitude 
measurements.
+
+For example, when you are using labels from another image, you can give 
NoiseChisel's objects image output for this image as the value to this option.
+In this way, you can be sure that regions with data do not harm your 
distribution.
+See @ref{Quantifying measurement limits} for more on the upper limit magnitude.
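+
+A sketch of such a call (the file and HDU names here are hypothetical):
+
+@example
+$ astmkcatalog labels.fits --upperlimitmag \
+               --upmaskfile=nc_objects.fits --upmaskhdu=1
+@end example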
 
 @item --upmaskhdu=STR
 The extension in the file specified by @option{--upmaskfile}.
 
 @item --upnum=INT
-The number of random samples to take for all the objects. A larger value to
-this option will give a more accurate result (asymptotically), but it will
-also slow down the process. When a randomly positioned sample overlaps with
-a detected/masked pixel it is not counted and another random position is
-found until the object completely lies over an undetected region. So you
-can be sure that for each object, this many samples over undetected objects
-are made. See the upper limit magnitude discussion in @ref{Quantifying
-measurement limits} for more.
+The number of random samples to take for all the objects.
+A larger value for this option will give a more accurate result 
(asymptotically), but it will also slow down the process.
+When a randomly positioned sample overlaps with a detected/masked pixel, it is 
not counted and another random position is found until the object completely 
lies over an undetected region.
+So you can be sure that for each object, this many samples over undetected 
regions are made.
+See the upper limit magnitude discussion in @ref{Quantifying measurement 
limits} for more.
 
 @item --uprange=INT,INT
-The range/width of the region (in pixels) to do random sampling along each
-dimension of the input image around each object's position. This is not a
-mandatory option and if not given (or given a value of zero in a
-dimension), the full possible range of the dataset along that dimension
-will be used. This is useful when the noise properties of the dataset vary
-gradually. In such cases, using the full range of the input dataset is
-going to bias the result. However, note that decreasing the the range of
-available positions too much will also artificially decrease the standard
-deviation of the final distribution (and thus bias the upper-limit
-measurement).
+The range/width of the region (in pixels) to do random sampling along each 
dimension of the input image around each object's position.
+This is not a mandatory option; if it is not given (or given a value of zero 
in a dimension), the full possible range of the dataset along that dimension 
will be used.
+This is useful when the noise properties of the dataset vary gradually.
+In such cases, using the full range of the input dataset is going to bias the 
result.
+However, note that decreasing the range of available positions too much will 
also artificially decrease the standard deviation of the final distribution 
(and thus bias the upper-limit measurement).
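+
+For example, the hypothetical call below would draw the random positions from 
a 500 pixel wide range along each axis, around each label's position:
+
+@example
+$ astmkcatalog labels.fits --upperlimitmag --uprange=500,500
+@end example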
 
 @item --envseed
 @cindex Seed, Random number generator
 @cindex Random number generator, Seed
-Read the random number generator type and seed value from the environment
-(see @ref{Generating random numbers}). Random numbers are used in
-calculating the random positions of different samples of each object.
+Read the random number generator type and seed value from the environment (see 
@ref{Generating random numbers}).
+Random numbers are used in calculating the random positions of different 
samples of each object.
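+
+For example, assuming a Bash-like shell, a reproducible run could fix the seed 
through the environment before calling MakeCatalog (the seed value here is 
arbitrary, see @ref{Generating random numbers}):
+
+@example
+$ export GSL_RNG_SEED=1599251212
+$ astmkcatalog labels.fits --upperlimitmag --envseed
+@end example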
 
 @item --upsigmaclip=FLT,FLT
-The raw distribution of random values will not be used to find the
-upper-limit magnitude, it will first be @mymath{\sigma}-clipped (see
-@ref{Sigma clipping}) to avoid outliers in the distribution (mainly the
-faint undetected wings of bright/large objects in the image). This option
-takes two values: the first is the multiple of @mymath{\sigma}, and the
-second is the termination criteria. If the latter is larger than 1, it is
-read as an integer number and will be the number of times to clip. If it is
-smaller than 1, it is interpreted as the tolerance level to stop
-clipping. See @ref{Sigma clipping} for a complete explanation.
+The raw distribution of random values will not be used to find the upper-limit 
magnitude; it will first be @mymath{\sigma}-clipped (see @ref{Sigma clipping}) 
to avoid outliers in the distribution (mainly the faint undetected wings of 
bright/large objects in the image).
+This option takes two values: the first is the multiple of @mymath{\sigma}, 
and the second is the termination criterion.
+If the latter is larger than 1, it is read as an integer number and will be 
the number of times to clip.
+If it is smaller than 1, it is interpreted as the tolerance level to stop 
clipping.
+See @ref{Sigma clipping} for a complete explanation.
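+
+For example, a hypothetical call clipping at 3@mymath{\sigma} with a tolerance 
of 0.2 (see @ref{Sigma clipping} for the exact definition of the tolerance):
+
+@example
+$ astmkcatalog labels.fits --upperlimitmag --upsigmaclip=3,0.2
+@end example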
 
 @item --upnsigma=FLT
-The multiple of the final (@mymath{\sigma}-clipped) standard deviation (or
-@mymath{\sigma}) used to measure the upper-limit brightness or
-magnitude.
+The multiple of the final (@mymath{\sigma}-clipped) standard deviation (or 
@mymath{\sigma}) used to measure the upper-limit brightness or magnitude.
 
 @item --checkuplim=INT[,INT]
-Print a table of positions and measured values for all the full random
-distribution used for one particular object or clump. If only one integer
-is given to this option, it is interpreted to be an object's label. If two
-values are given, the first is the object label and the second is the ID of
-requested clump within it.
-
-The output is a table with three columns (its type is determined with the
-@option{--tableformat} option, see @ref{Input output options}). The first
-two columns are the position of the first pixel in each random sampling of
-this particular object/clump. The the third column is the measured flux
-over that region. If the region overlapped with a detection or masked
-pixel, then its measured value will be a NaN (not-a-number). The total
-number of rows is thus unknown, but you can be sure that the number of rows
-with non-NaN measurements is the number given to the @option{--upnum}
-option.
+Print a table of positions and measured values for the full random 
distribution used for one particular object or clump.
+If only one integer is given to this option, it is interpreted to be an 
object's label.
+If two values are given, the first is the object label and the second is the 
ID of the requested clump within it.
+
+The output is a table with three columns (its type is determined with the 
@option{--tableformat} option, see @ref{Input output options}).
+The first two columns are the position of the first pixel in each random 
sampling of this particular object/clump.
+The third column is the measured flux over that region.
+If the region overlapped with a detection or masked pixel, then its measured 
value will be a NaN (not-a-number).
+The total number of rows is thus unknown, but you can be sure that the number 
of rows with non-NaN measurements is the number given to the @option{--upnum} 
option.
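+
+For example, the call below (with a hypothetical input) would also write the 
table of random positions and measurements used for the object with label 12:
+
+@example
+$ astmkcatalog labels.fits --upperlimitmag --checkuplim=12
+@end example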
 
 @end table
 
@@ -20223,41 +14725,28 @@ option.
 @node MakeCatalog measurements, MakeCatalog output, Upper-limit settings, 
Invoking astmkcatalog
 @subsubsection MakeCatalog measurements
 
-The final group of options particular to MakeCatalog are those that specify
-which measurements/columns should be written into the final output
-table. The most basic measurement is one which only produces one final
-value for each label (for example its total brightness: a single number).
-
-In this case, all the different label's measurements can be written as one
-column in a final table/catalog that contains other columns for other
-similar single-number measurements. The majority of this section is devoted
-to MakeCatalog's single-valued measurements. However, MakeCatalog can also
-do measurements that produce more than one value for each label. Currently
-the only such measurement is generation of spectra from 3D cubes with the
-@option{--spectrum} option and it is discussed in the end of this section.
-
-Command-line options are used to identify which single-valued measurements
-you want in the final catalog(s) and in what order. If any of the options
-below is called on the command line or in any of the configuration files,
-it will be included as a column in the output catalog. The order of the
-columns is in the same order as the options were seen by MakeCatalog (see
-@ref{Configuration file precedence}). Some of the columns apply to both
-``objects'' and ``clumps'' and some are particular to only one of them (for
-the definition of ``objects'' and ``clumps'', see
-@ref{Segment}). Columns/options that are unique to one catalog (only
-objects, or only clumps), are explicitly marked with [Objects] or [Clumps]
-to specify the catalog they will be placed in.
+The final group of options particular to MakeCatalog are those that specify 
which measurements/columns should be written into the final output table.
+The most basic measurement is one which only produces one final value for each 
label (for example its total brightness: a single number).
+
+In this case, all the different label's measurements can be written as one 
column in a final table/catalog that contains other columns for other similar 
single-number measurements.
+The majority of this section is devoted to MakeCatalog's single-valued 
measurements.
+However, MakeCatalog can also do measurements that produce more than one value 
for each label.
+Currently the only such measurement is generation of spectra from 3D cubes 
with the @option{--spectrum} option and it is discussed in the end of this 
section.
+
+Command-line options are used to identify which measurements you want in the 
final catalog(s) and in what order.
+If any of the options below is called on the command line or in any of the 
configuration files, it will be included as a column in the output catalog.
+The columns will be in the same order that the options were seen by 
MakeCatalog (see @ref{Configuration file precedence}).
+Some of the columns apply to both ``objects'' and ``clumps'' and some are 
particular to only one of them (for the definition of ``objects'' and 
``clumps'', see @ref{Segment}).
+Columns/options that are unique to one catalog (only objects, or only clumps), 
are explicitly marked with [Objects] or [Clumps] to specify the catalog they 
will be placed in.
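+
+For example, the hypothetical call below would produce a catalog whose columns 
are the IDs, positions and magnitudes, in exactly this order:
+
+@example
+$ astmkcatalog labels.fits --ids --x --y --magnitude
+@end example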
 
 @table @option
 
 @item --i
 @itemx --ids
-This is a unique option which can add multiple columns to the final
-catalog(s). Calling this option will put the object IDs (@option{--objid})
-in the objects catalog and host-object-ID (@option{--hostobjid}) and
-ID-in-host-object (@option{--idinhostobj}) into the clumps catalog. Hence
-if only object catalogs are required, it has the same effect as
-@option{--objid}.
+This is a unique option which can add multiple columns to the final catalog(s).
+Calling this option will put the object IDs (@option{--objid}) in the objects 
catalog and host-object-ID (@option{--hostobjid}) and ID-in-host-object 
(@option{--idinhostobj}) into the clumps catalog.
+Hence if only object catalogs are required, it has the same effect as 
@option{--objid}.
 
 @item --objid
 [Objects] ID of this object.
@@ -20271,19 +14760,15 @@ if only object catalogs are required, it has the same 
effect as
 
 @item -x
 @itemx --x
-The flux weighted center of all objects and clumps along the first FITS
-axis (horizontal when viewed in SAO ds9), see @mymath{\overline{x}} in
-@ref{Measuring elliptical parameters}. The weight has to have a positive
-value (pixel value larger than the Sky value) to be meaningful! Specially
-when doing matched photometry, this might not happen: no pixel value might
-be above the Sky value. For such detections, the geometric center will be
-reported in this column (see @option{--geox}). You can use
-@option{--weightarea} to see which was used.
+The flux weighted center of all objects and clumps along the first FITS axis 
(horizontal when viewed in SAO ds9), see @mymath{\overline{x}} in 
@ref{Measuring elliptical parameters}.
+The weight has to have a positive value (pixel value larger than the Sky 
value) to be meaningful!
+Especially when doing matched photometry, this might not happen: no pixel 
value might be above the Sky value.
+For such detections, the geometric center will be reported in this column (see 
@option{--geox}).
+You can use @option{--weightarea} to see which was used.
 
 @item -y
 @itemx --y
-The flux weighted center of all objects and clumps along the second FITS
-axis (vertical when viewed in SAO ds9). See @option{--x}.
+The flux weighted center of all objects and clumps along the second FITS axis 
(vertical when viewed in SAO ds9).
+See @option{--x}.
 
 @item -z
 @itemx --z
@@ -20291,13 +14776,14 @@ The flux weighted center of all objects and clumps 
along the third FITS
 axis. See @option{--x}.
 
 @item --geox
-The geometric center of all objects and clumps along the first FITS axis
-axis. The geometric center is the average pixel positions irrespective of
-their pixel values.
+The geometric center of all objects and clumps along the first FITS axis.
+The geometric center is the average of the pixel positions, irrespective of 
their pixel values.
 
 @item --geoy
-The geometric center of all objects and clumps along the second FITS axis
-axis, see @option{--geox}.
+The geometric center of all objects and clumps along the second FITS axis, 
see @option{--geox}.
+
+@item --geoz
+The geometric center of all objects and clumps along the third FITS axis, 
see @option{--geox}.
 
 @item --minx
 The minimum position of all objects and clumps along the first FITS axis.
@@ -20311,25 +14797,31 @@ The minimum position of all objects and clumps along 
the second FITS axis.
 @item --maxy
 The maximum position of all objects and clumps along the second FITS axis.
 
+@item --minz
+The minimum position of all objects and clumps along the third FITS axis.
+
+@item --maxz
+The maximum position of all objects and clumps along the third FITS axis.
+
 @item --clumpsx
-[Objects] The flux weighted center of all the clumps in this object along
-the first FITS axis. See @option{--x}.
+[Objects] The flux weighted center of all the clumps in this object along the 
first FITS axis.
+See @option{--x}.
 
 @item --clumpsy
-[Objects] The flux weighted center of all the clumps in this object along
-the second FITS axis. See @option{--x}.
+[Objects] The flux weighted center of all the clumps in this object along the 
second FITS axis.
+See @option{--x}.
 
 @item --clumpsz
-[Objects] The flux weighted center of all the clumps in this object along
-the third FITS axis. See @option{--x}.
+[Objects] The flux weighted center of all the clumps in this object along the 
third FITS axis.
+See @option{--x}.
 
 @item --clumpsgeox
-[Objects] The geometric center of all the clumps in this object along
-the first FITS axis. See @option{--geox}.
+[Objects] The geometric center of all the clumps in this object along the 
first FITS axis.
+See @option{--geox}.
 
 @item --clumpsgeoy
-[Objects] The geometric center of all the clumps in this object along
-the second FITS axis. See @option{--geox}.
+[Objects] The geometric center of all the clumps in this object along the 
second FITS axis.
+See @option{--geox}.
 
 @item --clumpsgeoz
 [Objects] The geometric center of all the clumps in this object along
@@ -20337,31 +14829,25 @@ the third FITS axis. See @option{--geoz}.
 
 @item -r
 @itemx --ra
-Flux weighted right ascension of all objects or clumps, see
-@option{--x}. This is just an alias for one of the lower-level
-@option{--w1} or @option{--w2} options. Using the FITS WCS keywords
-(@code{CTYPE}), MakeCatalog will determine which axis corresponds to the
-right ascension. If no @code{CTYPE} keywords start with @code{RA}, an error
-will be printed when requesting this column and MakeCatalog will abort.
+Flux weighted right ascension of all objects or clumps, see @option{--x}.
+This is just an alias for one of the lower-level @option{--w1} or 
@option{--w2} options.
+Using the FITS WCS keywords (@code{CTYPE}), MakeCatalog will determine which 
axis corresponds to the right ascension.
+If no @code{CTYPE} keywords start with @code{RA}, an error will be printed 
when requesting this column and MakeCatalog will abort.
 
 @item -d
 @itemx --dec
-Flux weighted declination of all objects or clumps, see @option{--x}. This
-is just an alias for one of the lower-level @option{--w1} or @option{--w2}
-options. Using the FITS WCS keywords (@code{CTYPE}), MakeCatalog will
-determine which axis corresponds to the declination. If no @code{CTYPE}
-keywords start with @code{DEC}, an error will be printed when requesting
-this column and MakeCatalog will abort.
+Flux weighted declination of all objects or clumps, see @option{--x}.
+This is just an alias for one of the lower-level @option{--w1} or 
@option{--w2} options.
+Using the FITS WCS keywords (@code{CTYPE}), MakeCatalog will determine which 
axis corresponds to the declination.
+If no @code{CTYPE} keywords start with @code{DEC}, an error will be printed 
when requesting this column and MakeCatalog will abort.
 
 @item --w1
-Flux weighted first WCS axis of all objects or clumps, see
-@option{--x}. The first WCS axis is commonly used as right ascension in
-images.
+Flux weighted first WCS axis of all objects or clumps, see @option{--x}.
+The first WCS axis is commonly used as right ascension in images.
 
 @item --w2
-Flux weighted second WCS axis of all objects or clumps, see
-@option{--x}. The second WCS axis is commonly used as declination in
-images.
+Flux weighted second WCS axis of all objects or clumps, see @option{--x}.
+The second WCS axis is commonly used as declination in images.
 
 @item --w3
 Flux weighted third WCS axis of all objects or clumps, see
@@ -20369,14 +14855,12 @@ Flux weighted third WCS axis of all objects or 
clumps, see
 field unit data cubes.
 
 @item --geow1
-Geometric center in first WCS axis of all objects or clumps, see
-@option{--geox}. The first WCS axis is commonly used as right ascension in
-images.
+Geometric center in first WCS axis of all objects or clumps, see 
@option{--geox}.
+The first WCS axis is commonly used as right ascension in images.
 
 @item --geow2
-Geometric center in second WCS axis of all objects or clumps, see
-@option{--geox}. The second WCS axis is commonly used as declination in
-images.
+Geometric center in second WCS axis of all objects or clumps, see 
@option{--geox}.
+The second WCS axis is commonly used as declination in images.
 
 @item --geow3
 Geometric center in third WCS axis of all objects or clumps, see
@@ -20384,29 +14868,24 @@ Geometric center in third WCS axis of all objects or 
clumps, see
 integral field unit data cubes.
 
 @item --clumpsw1
-[Objects] Flux weighted center in first WCS axis of all clumps in this
-object, see @option{--x}. The first WCS axis is commonly used as right
-ascension in images.
+[Objects] Flux weighted center in first WCS axis of all clumps in this object, 
see @option{--x}.
+The first WCS axis is commonly used as right ascension in images.
 
 @item --clumpsw2
-[Objects] Flux weighted center in third WCS axis of all clumps in this
-object, see @option{--x}. The second WCS axis is commonly used as
-declination in images.
+[Objects] Flux weighted center in second WCS axis of all clumps in this 
object, see @option{--x}.
+The second WCS axis is commonly used as declination in images.
 
 @item --clumpsw3
-[Objects] Flux weighted center in third WCS axis of all clumps in this
-object, see @option{--x}. The third WCS axis is commonly used as wavelength
-in integral field unit data cubes.
+[Objects] Flux weighted center in third WCS axis of all clumps in this object, 
see @option{--x}.
+The third WCS axis is commonly used as wavelength in integral field unit data 
cubes.
 
 @item --clumpsgeow1
-[Objects] Geometric center in first WCS axis of all clumps in this object,
-see @option{--geox}. The first WCS axis is commonly used as right ascension
-in images.
+[Objects] Geometric center in first WCS axis of all clumps in this object, 
see @option{--geox}.
+The first WCS axis is commonly used as right ascension in images.
 
 @item --clumpsgeow2
-[Objects] Geometric center in second WCS axis of all clumps in this object,
-see @option{--geox}. The second WCS axis is commonly used as declination in
-images.
+[Objects] Geometric center in second WCS axis of all clumps in this object, 
see @option{--geox}.
+The second WCS axis is commonly used as declination in images.
 
 @item --clumpsgeow3
 [Objects] Geometric center in third WCS axis of all clumps in this object,
@@ -20415,48 +14894,34 @@ integral field unit data cubes.
 
 @item -b
 @itemx --brightness
-The brightness (sum of all pixel values), see @ref{Flux Brightness and
-magnitude}. For clumps, the ambient brightness (flux of river pixels around
-the clump multiplied by the area of the clump) is removed, see
-@option{--riverflux}. So the sum of all the clumps brightness in the clump
-catalog will be smaller than the total clump brightness in the
-@option{--clumpbrightness} column of the objects catalog.
+The brightness (sum of all pixel values), see @ref{Flux Brightness and 
magnitude}.
+For clumps, the ambient brightness (flux of river pixels around the clump 
multiplied by the area of the clump) is removed, see @option{--riverflux}.
+So the sum of all the clumps' brightness in the clumps catalog will be smaller 
than the total clump brightness in the @option{--clumpbrightness} column of the 
objects catalog.
 
-If no usable pixels are present over the clump or object (for example they
-are all blank), the returned value will be NaN (note that zero is
-meaningful).
+If no usable pixels are present over the clump or object (for example they are 
all blank), the returned value will be NaN (note that zero is meaningful).
 
 @item --brightnesserr
-The (@mymath{1\sigma}) error in measuring the brightness of objects or
-clumps.
+The (@mymath{1\sigma}) error in measuring the brightness of objects or clumps.
 
 @item --clumpbrightness
-[Objects] The total brightness of the clumps within an object. This is
-simply the sum of the pixels associated with clumps in the object. If no
-usable pixels are present over the clump or object (for example they are
-all blank), the stored value will be NaN (note that zero is meaningful).
+[Objects] The total brightness of the clumps within an object.
+This is simply the sum of the pixels associated with clumps in the object.
+If no usable pixels are present over the clump or object (for example they are 
all blank), the stored value will be NaN (note that zero is meaningful).
 
 @item --brightnessnoriver
-[Clumps] The Sky (not river) subtracted clump brightness. By definition,
-for the clumps, the average brightness of the rivers surrounding it are
-subtracted from it for a first order accounting for contamination by
-neighbors. In cases where you will be calculating the flux brightness
-difference later (one example below) the contamination will be (mostly)
-removed at that stage, which is why this column was added.
+[Clumps] The Sky (not river) subtracted clump brightness.
+By definition, for each clump, the average brightness of the rivers 
surrounding it is subtracted from it, as a first-order accounting for 
contamination by neighbors.
+In cases where you will be calculating the flux brightness difference later 
(one example below), the contamination will be (mostly) removed at that stage, 
which is why this column was added.
 
-If no usable pixels are present over the clump or object (for example they
-are all blank), the stored value will be NaN (note that zero is
-meaningful).
+If no usable pixels are present over the clump or object (for example they are 
all blank), the stored value will be NaN (note that zero is meaningful).
 
 @item --mean
-The mean sky subtracted value of pixels within the object or clump. For
-clumps, the average river flux is subtracted from the sky subtracted
-mean.
+The mean sky subtracted value of pixels within the object or clump.
+For clumps, the average river flux is subtracted from the sky subtracted mean.
 
 @item --median
-The median sky subtracted value of pixels within the object or clump. For
-clumps, the average river flux is subtracted from the sky subtracted
-median.
+The median sky subtracted value of pixels within the object or clump.
+For clumps, the average river flux is subtracted from the sky subtracted 
median.
 
 @item -m
 @itemx --magnitude
@@ -20464,79 +14929,56 @@ The magnitude of clumps or objects, see 
@option{--brightness}.
 
 @item -e
 @itemx --magnitudeerr
-The magnitude error of clumps or objects. The magnitude error is calculated
-from the signal-to-noise ratio (see @option{--sn} and @ref{Quantifying
-measurement limits}). Note that until now this error assumes uncorrelated
-pixel values and also does not include the error in estimating the aperture
-(or error in generating the labeled image).
+The magnitude error of clumps or objects.
+The magnitude error is calculated from the signal-to-noise ratio (see 
@option{--sn} and @ref{Quantifying measurement limits}).
+Note that this error currently assumes uncorrelated pixel values and also does 
not include the error in estimating the aperture (or error in generating the 
labeled image).
 
 For now these factors have to be found by other means.
-@url{https://savannah.gnu.org/task/index.php?14124, Task 14124} has been
-defined for work on adding these sources of error too.
+@url{https://savannah.gnu.org/task/index.php?14124, Task 14124} has been 
defined for work on adding these sources of error too.
 
 @item --clumpsmagnitude
-[Objects] The magnitude of all clumps in this object, see
-@option{--clumpbrightness}.
+[Objects] The magnitude of all clumps in this object, see 
@option{--clumpbrightness}.
 
 @item --upperlimit
-The upper limit value (in units of the input image) for this object or
-clump. See @ref{Quantifying measurement limits} and @ref{Upper-limit
-settings} for a complete explanation. This is very important for the
-fainter and smaller objects in the image where the measured magnitudes are
-not reliable.
+The upper limit value (in units of the input image) for this object or clump.
+See @ref{Quantifying measurement limits} and @ref{Upper-limit settings} for a 
complete explanation.
+This is very important for the fainter and smaller objects in the image where 
the measured magnitudes are not reliable.
 
 @item --upperlimitmag
-The upper limit magnitude for this object or clump. See @ref{Quantifying
-measurement limits} and @ref{Upper-limit settings} for a complete
-explanation. This is very important for the fainter and smaller objects in
-the image where the measured magnitudes are not reliable.
+The upper limit magnitude for this object or clump.
+See @ref{Quantifying measurement limits} and @ref{Upper-limit settings} for a 
complete explanation.
+This is very important for the fainter and smaller objects in the image where 
the measured magnitudes are not reliable.
 
 @item --upperlimitonesigma
-The @mymath{1\sigma} upper limit value (in units of the input image) for
-this object or clump. See @ref{Quantifying measurement limits} and
-@ref{Upper-limit settings} for a complete explanation. When
-@option{--upnsigma=1}, this column's values will be the same as
-@option{--upperlimit}.
+The @mymath{1\sigma} upper limit value (in units of the input image) for this 
object or clump.
+See @ref{Quantifying measurement limits} and @ref{Upper-limit settings} for a 
complete explanation.
+When @option{--upnsigma=1}, this column's values will be the same as 
@option{--upperlimit}.
 
 @item --upperlimitsigma
-The position of the total brightness measured within the distribution of
-randomly placed upperlimit measurements in units of the distribution's
-@mymath{\sigma} or standard deviation. See @ref{Quantifying measurement
-limits} and @ref{Upper-limit settings} for a complete explanation.
+The position of the total brightness measured within the distribution of 
randomly placed upper-limit measurements in units of the distribution's 
@mymath{\sigma} or standard deviation.
+See @ref{Quantifying measurement limits} and @ref{Upper-limit settings} for a 
complete explanation.
 
 @item --upperlimitquantile
-The position of the total brightness measured within the distribution of
-randomly placed upperlimit measurements as a quantile (value between 0 or
-1). See @ref{Quantifying measurement limits} and @ref{Upper-limit settings}
-for a complete explanation. If the object is brighter than the brightest
-randomly placed profile, a value of @code{inf} is returned. If it is less
-than the minimum, a value of @code{-inf} is reported.
+The position of the total brightness measured within the distribution of 
randomly placed upper-limit measurements as a quantile (a value between 0 
and 1).
+See @ref{Quantifying measurement limits} and @ref{Upper-limit settings} for a 
complete explanation.
+If the object is brighter than the brightest randomly placed profile, a value 
of @code{inf} is returned.
+If it is less than the minimum, a value of @code{-inf} is reported.
 
 @item --upperlimitskew
 @cindex Skewness
-This column contains the non-parametric skew of the @mymath{\sigma}-clipped
-random distribution that was used to estimate the upper-limit
-magnitude. Taking @mymath{\mu} as the mean, @mymath{\nu} as the median and
-@mymath{\sigma} as the standard deviation, the traditional definition of
-skewness is defined as: @mymath{(\mu-\nu)/\sigma}.
-
-This can be a good measure to see how much you can trust the random
-measurements, or in other words, how accurately the regions with signal
-have been masked/detected. If the skewness is strong (and to the positive),
-then you can tell that you have a lot of undetected signal in the dataset,
-and therefore that the upper-limit measurement (and other measurements) are
-not reliable.
+This column contains the non-parametric skew of the @mymath{\sigma}-clipped 
random distribution that was used to estimate the upper-limit magnitude.
+Taking @mymath{\mu} as the mean, @mymath{\nu} as the median and 
@mymath{\sigma} as the standard deviation, the traditional definition of 
skewness is: @mymath{(\mu-\nu)/\sigma}.
+
+This can be a good measure to see how much you can trust the random 
measurements, or in other words, how accurately the regions with signal have 
been masked/detected.
+If the skewness is strongly positive, then you can tell that you have a lot of 
undetected signal in the dataset, and therefore that the upper-limit 
measurement (and other measurements) are not reliable.
 
 @item --riverave
-[Clumps] The average brightness of the river pixels around this
-clump. River pixels were defined in Akhlaghi and Ichikawa 2015. In
-short they are the pixels immediately outside of the clumps. This
-value is used internally to find the brightness (or magnitude) and
-signal to noise ratio of the clumps. It can generally also be used as
-a scale to gauge the base (ambient) flux surrounding the clump. In
-case there was no river pixels, then this column will have the value
-of the Sky under the clump. So note that this value is @emph{not} sky
-subtracted.
+[Clumps] The average brightness of the river pixels around this clump.
+River pixels were defined in Akhlaghi and Ichikawa 2015.
+In short they are the pixels immediately outside of the clumps.
+This value is used internally to find the brightness (or magnitude) and signal 
to noise ratio of the clumps.
+It can generally also be used as a scale to gauge the base (ambient) flux 
surrounding the clump.
+In case there were no river pixels, this column will have the value of the Sky 
under the clump.
+So note that this value is @emph{not} sky subtracted.
 
 @item --rivernum
 [Clumps] The number of river pixels around this clump, see
@@ -20544,18 +14986,16 @@ subtracted.
 
 @item -n
 @itemx --sn
-The Signal to noise ratio (S/N) of all clumps or objects. See Akhlaghi
-and Ichikawa (2015) for the exact equations used.
+The Signal to noise ratio (S/N) of all clumps or objects.
+See Akhlaghi and Ichikawa (2015) for the exact equations used.
 
 @item --sky
-The sky flux (per pixel) value under this object or clump. This is
-actually the mean value of all the pixels in the sky image that lie on
-the same position as the object or clump.
+The sky flux (per pixel) value under this object or clump.
+This is actually the mean value of all the pixels in the sky image that lie on 
the same position as the object or clump.
 
 @item --std
-The sky value standard deviation (per pixel) for this clump or object. This
-is the square root of the mean variance under the object, or the root mean
-square.
+The sky value standard deviation (per pixel) for this clump or object.
+This is the square root of the mean variance under the object, or the root 
mean square.
 
 @item -C
 @itemx --numclumps
@@ -20563,8 +15003,7 @@ square.
 
 @item -a
 @itemx --area
-The area (number of pixels) in any clump or object independent of what
-pixel it lies over (if it is NaN/blank or unused for example).
+The raw area (number of pixels/voxels) in any clump or object independent of 
what pixel it lies over (if it is NaN/blank or unused for example).
 
 @item --areaxy
 @cindex IFU: Integral Field Unit
@@ -20578,14 +15017,12 @@ projection onto the first two dimensions would be a 
narrow-band image.
 [Objects] The total area of all the clumps in this object.
 
 @item --weightarea
-The area (number of pixels) used in the flux weighted position
-calculations.
+The area (number of pixels) used in the flux weighted position calculations.
 
 @item --geoarea
-The area of all the pixels labeled with an object or clump. Note that
-unlike @option{--area}, pixel values are completely ignored in this
-column. For example, if a pixel value is blank, it won't be counted in
-@option{--area}, but will be counted here.
+The area of all the pixels labeled with an object or clump.
+Note that unlike @option{--area}, pixel values are completely ignored in this 
column.
+For example, if a pixel value is blank, it won't be counted in 
@option{--area}, but will be counted here.
 
 @item --geoareaxy
 Similar to @option{--geoarea}, when the clump or object is projected onto
@@ -20595,42 +15032,35 @@ projection onto the first two dimensions would be a 
narrow-band image.
 
 @item -A
 @itemx --semimajor
-The pixel-value weighted root mean square (RMS) along the semi-major axis
-of the profile (assuming it is an ellipse) in units of pixels. See
-@ref{Measuring elliptical parameters}.
+The pixel-value weighted root mean square (RMS) along the semi-major axis of 
the profile (assuming it is an ellipse) in units of pixels.
+See @ref{Measuring elliptical parameters}.
 
 @item -B
 @itemx --semiminor
-The pixel-value weighted root mean square (RMS) along the semi-minor axis
-of the profile (assuming it is an ellipse) in units of pixels. See
-@ref{Measuring elliptical parameters}.
+The pixel-value weighted root mean square (RMS) along the semi-minor axis of 
the profile (assuming it is an ellipse) in units of pixels.
+See @ref{Measuring elliptical parameters}.
 
 @item --axisratio
-The pixel-value weighted axis ratio (semi-minor/semi-major) of the object
-or clump.
+The pixel-value weighted axis ratio (semi-minor/semi-major) of the object or 
clump.
 
 @item -p
 @itemx --positionangle
 The pixel-value weighted angle of the semi-major axis with the first FITS
-axis in degrees. See @ref{Measuring elliptical parameters}.
+axis in degrees.
+See @ref{Measuring elliptical parameters}.
 
 @item --geosemimajor
-The geometric (ignoring pixel values) root mean square (RMS) along the
-semi-major axis of the profile, assuming it is an ellipse, in units of
-pixels.
+The geometric (ignoring pixel values) root mean square (RMS) along the 
semi-major axis of the profile, assuming it is an ellipse, in units of pixels.
 
 @item --geosemiminor
-The geometric (ignoring pixel values) root mean square (RMS) along the
-semi-minor axis of the profile, assuming it is an ellipse, in units of
-pixels.
+The geometric (ignoring pixel values) root mean square (RMS) along the 
semi-minor axis of the profile, assuming it is an ellipse, in units of pixels.
 
 @item --geoaxisratio
 The geometric (ignoring pixel values) axis ratio of the profile, assuming
 it is an ellipse.
 
 @item --geopositionangle
-The geometric (ignoring pixel values) angle of the semi-major axis
-with the first FITS axis in degrees.
+The geometric (ignoring pixel values) angle of the semi-major axis with the 
first FITS axis in degrees.
 
 @end table
 
@@ -20676,94 +15106,56 @@ the same physical object.
 
 @node MakeCatalog output,  , MakeCatalog measurements, Invoking astmkcatalog
 @subsubsection MakeCatalog output
+After it has completed all the requested measurements (see @ref{MakeCatalog 
measurements}), MakeCatalog will store its measurements in table(s).
+If an output filename is given (see @option{--output} in @ref{Input output 
options}), the format of the table will be deduced from the name.
+When it isn't given, the input name will be appended with a @file{_cat} suffix 
(see @ref{Automatic output}) and its format will be determined from the 
@option{--tableformat} option, which is also discussed in @ref{Input output 
options}.
+@option{--tableformat} is also necessary when the requested output name is a 
FITS table (recall that FITS can accept ASCII and binary tables, see 
@ref{Table}).
+
+By default (when @option{--spectrum} isn't called) only a single catalog/table 
will be created for ``objects''; however, if @option{--clumpscat} is called, a 
secondary catalog/table will also be created.
+For more on ``objects'' and ``clumps'', see @ref{Segment}.
+In short, if you only have one set of labeled images, you don't have to worry 
about clumps (they are deactivated by default).
 
-After it has completed all the requested measurements (see @ref{MakeCatalog
-measurements}), MakeCatalog will store its measurements in table(s). If an
-output filename is given (see @option{--output} in @ref{Input output
-options}), the format of the table will be deduced from the name. When it
-isn't given, the input name will be appended with a @file{_cat} suffix (see
-@ref{Automatic output}) and its format will be determined from the
-@option{--tableformat} option, which is also discussed in @ref{Input output
-options}. @option{--tableformat} is also necessary when the requested
-output name is a FITS table (recall that FITS can accept ASCII and binary
-tables, see @ref{Table}).
-
-By default (when @option{--spectrum} isn't called) only a single
-catalog/table will be created for ``objects'', however, if
-@option{--clumpscat} is called, a secondary catalog/table will also be
-created. For more on ``objects'' and ``clumps'', see @ref{Segment}. In
-short, if you only have one set of labeled images, you don't have to worry
-about clumps (they are deactivated by default).
-
-When @option{--spectrum} is called, it is not mandatory to specify any
-single-valued measurement columns. In this case, the output will only be
-the spectra of each labeled region. See the description of
-@option{--spectrum} in @ref{MakeCatalog measurements}.
+When @option{--spectrum} is called, it is not mandatory to specify any 
single-valued measurement columns.
+In this case, the output will only be the spectra of each labeled region.
+See the description of @option{--spectrum} in @ref{MakeCatalog measurements}.
 
 The full list of MakeCatalog's output options is elaborated below.
 
 @table @option
 @item -C
 @itemx --clumpscat
-Do measurements on clumps and produce a second catalog (only devoted to
-clumps). When this option is given, MakeCatalog will also look for a
-secondary labeled dataset (identifying substructure) and produce a catalog
-from that. For more on the definition on ``clumps'', see @ref{Segment}.
-
-When the output is a FITS file, the objects and clumps catalogs/tables will
-be stored as multiple extensions of one FITS file. You can use @ref{Table}
-to inspect the column meta-data and contents in this case. However, in
-plain text format (see @ref{Gnuastro text table format}), it is only
-possible to keep one table per file. Therefore, if the output is a text
-file, two output files will be created, ending in @file{_o.txt} (for
-objects) and @file{_c.txt} (for clumps).
+Do measurements on clumps and produce a second catalog (only devoted to 
clumps).
+When this option is given, MakeCatalog will also look for a secondary labeled 
dataset (identifying substructure) and produce a catalog from that.
+For more on the definition of ``clumps'', see @ref{Segment}.
+
+When the output is a FITS file, the objects and clumps catalogs/tables will be 
stored as multiple extensions of one FITS file.
+You can use @ref{Table} to inspect the column meta-data and contents in this 
case.
+However, in plain text format (see @ref{Gnuastro text table format}), it is 
only possible to keep one table per file.
+Therefore, if the output is a text file, two output files will be created, 
ending in @file{_o.txt} (for objects) and @file{_c.txt} (for clumps).
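+
+As an illustration (the input name is hypothetical), the call below would 
write the objects and clumps catalogs as two extensions of one FITS table:
+
+@example
+$ astmkcatalog seg.fits --clumpscat --ids --magnitude --output=cat.fits
+@end example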
 
 @item --noclumpsort
-Don't sort the clumps catalog based on object ID (only relevant with
-@option{--clumpscat}). This option will benefit the
-performance@footnote{The performance boost due to @option{--noclumpsort}
-can only be felt when there are a huge number of objects. Therefore, by
-default the output is sorted to avoid miss-understandings or bugs in the
-user's scripts when the user forgets to sort the outputs.} of MakeCatalog
-when it is run on multiple threads @emph{and} the position of the rows in
-the clumps catalog is irrelevant (for example you just want the
-number-counts).
-
-MakeCatalog does all its measurements on each @emph{object} independently
-and in parallel. As a result, while it is writing the measurements on each
-object's clumps, it doesn't know how many clumps there were in previous
-objects. Each thread will just fetch the first available row and write the
-information of clumps (in order) starting from that row. After all the
-measurements are done, by default (when this option isn't called),
-MakeCatalog will reorder/permute the clumps catalog to have both the object
-and clump ID in an ascending order.
-
-If you would like to order the catalog later (when its a plain text file),
-you can run the following command to sort the rows by object ID (and clump
-ID within each object), assuming they are respectively the first and second
-columns:
+Don't sort the clumps catalog based on object ID (only relevant with 
@option{--clumpscat}).
+This option will benefit the performance@footnote{The performance boost due to 
@option{--noclumpsort} can only be felt when there are a huge number of objects.
+Therefore, by default the output is sorted to avoid misunderstandings or bugs 
in the user's scripts when the user forgets to sort the outputs.} of 
MakeCatalog when it is run on multiple threads @emph{and} the position of the 
rows in the clumps catalog is irrelevant (for example you just want the 
number-counts).
+
+MakeCatalog does all its measurements on each @emph{object} independently and 
in parallel.
+As a result, while it is writing the measurements on each object's clumps, it 
doesn't know how many clumps there were in previous objects.
+Each thread will just fetch the first available row and write the information 
of clumps (in order) starting from that row.
+After all the measurements are done, by default (when this option isn't 
called), MakeCatalog will reorder/permute the clumps catalog to have both the 
object and clump ID in an ascending order.
+
+If you would like to order the catalog later (when it is a plain text file), 
you can run the following command to sort the rows by object ID (and clump ID 
within each object), assuming they are respectively the first and second 
columns:
 
 @example
 $ awk '!/^#/' out_c.txt | sort -g -k1,1 -k2,2
 @end example
 
 @item --sfmagnsigma=FLT
-The median standard deviation (from a @command{MEDSTD} keyword in the Sky
-standard deviation image) will be multiplied by the value to this option
-and its magnitude will be reported in the comments of the output
-catalog. This value is a per-pixel value, not per object/clump and is not
-found over an area or aperture, like the common @mymath{5\sigma} values
-that are commonly reported as a measure of depth or the upper-limit
-measurements (see @ref{Quantifying measurement limits}).
+The median standard deviation (from a @command{MEDSTD} keyword in the Sky 
standard deviation image) will be multiplied by the value given to this option 
and its magnitude will be reported in the comments of the output catalog.
+This value is per-pixel, not per object/clump, and is not found over an area 
or aperture like the @mymath{5\sigma} values that are commonly reported as a 
measure of depth or the upper-limit measurements (see @ref{Quantifying 
measurement limits}).
 
 @item --sfmagarea=FLT
-Area (in arcseconds squared) to convert the per-pixel estimation of
-@option{--sfmagnsigma} in the comments section of the output tables. Note
-that this is just a unit conversion using the World Coordinate System (WCS)
-information in the input's header. It does not actually do any measurements
-on this area. For random measurements on any area, please use the
-upper-limit columns of MakeCatalog (see the discussion on upper-limit
-measurements in @ref{Quantifying measurement limits}).
+Area (in arcseconds squared) to convert the per-pixel estimation of 
@option{--sfmagnsigma} in the comments section of the output tables.
+Note that this is just a unit conversion using the World Coordinate System 
(WCS) information in the input's header.
+It does not actually do any measurements on this area.
+For random measurements on any area, please use the upper-limit columns of 
MakeCatalog (see the discussion on upper-limit measurements in @ref{Quantifying 
measurement limits}).
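+
+For example, the hypothetical call below would report the @mymath{5\sigma} 
surface brightness limit over one arcsecond squared in the output's comments:
+
+@example
+$ astmkcatalog labels.fits --ids --magnitude \
+               --sfmagnsigma=5 --sfmagarea=1
+@end example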
 @end table
 
 
@@ -20772,17 +15164,13 @@ measurements in @ref{Quantifying measurement limits}).
 @node Match,  , MakeCatalog, Data analysis
 @section Match
 
-Data can come come from different telescopes, filters, software and even
-different configurations for a single software. As a result, one of the
-primary things to do after generating catalogs from each of these sources
-(for example with @ref{MakeCatalog}), is to find which sources in one
-catalog correspond to which in the other(s). In other words, to `match' the
-two catalogs with each other.
+Data can come from different telescopes, filters, software and even different 
configurations of the same software.
+As a result, one of the primary things to do after generating catalogs from 
each of these sources (for example with @ref{MakeCatalog}), is to find which 
sources in one catalog correspond to which in the other(s).
+In other words, to `match' the two catalogs with each other.
 
-Gnuastro's Match program is in charge of such operations. The nearest
-objects in the two catalogs, within the given aperture, will be found and
-given as output. The aperture can be a circle or an ellipse with any
-orientation.
+Gnuastro's Match program is in charge of such operations.
+The nearest objects in the two catalogs, within the given aperture, will be 
found and given as output.
+The aperture can be a circle or an ellipse with any orientation.
 
 @menu
 * Invoking astmatch::           Inputs, outputs and options of Match
@@ -20791,9 +15179,8 @@ orientation.
 @node Invoking astmatch,  , Match, Match
 @subsection Invoking Match
 
-When given two catalogs, Match finds the rows that are nearest to each
-other within an input aperture. The executable name is @file{astmatch} with
-the following general template
+When given two catalogs, Match finds the rows that are nearest to each other 
within an input aperture.
+The executable name is @file{astmatch} with the following general template
 
 @example
 $ astmatch [OPTION ...] input-1 input-2
@@ -20834,171 +15221,122 @@ $ astmatch --ccol1=2,3,4 --ccol2=2,3,4 
-a0.5/3600,0.5/3600,5e-10 \
            in1.fits in2.txt
 @end example
 
-Match will find the rows that are nearest to each other in two catalogs
-(given some coordinate columns). Therefore two catalogs are necessary for
-input. However, they don't necessarily have to be files: 1) the first
-catalog can also come from the standard input (for example a pipe, see
-@ref{Standard input}); 2) when only one point is needed, you can use the
-@option{--coord} option to avoid creating a file for the second
-catalog. When the inputs are files, they can be plain text tables or FITS
-tables, for more see @ref{Tables}.
-
-Match follows the same basic behavior of all Gnuastro programs as fully
-described in @ref{Common program behavior}. If the first input is a FITS
-file, the common @option{--hdu} option (see @ref{Input output options})
-should be used to identify the extension. When the second input is FITS,
-the extension must be specified with @option{--hdu2}.
-
-When @option{--quiet} is not called, Match will print the number of matches
-found in standard output (on the command-line). When matches are found, by
-default, the output file(s) will be the re-arranged input tables such that
-the rows match each other: both output tables will have the same number of
-rows which are matched with each other. If @option{--outcols} is called,
-the output is a single table with rows chosen from either of the two inputs
-in any order. If the @option{--logasoutput} option is called, the output
-will be a single table with the contents of the log file, see below. If no
-matches are found, the columns of the output table(s) will have zero rows
-(with proper meta-data).
-
-If no output file name is given with the @option{--output} option, then
-automatic output @ref{Automatic output} will be used to determine the
-output name(s). Depending on @option{--tableformat} (see @ref{Input output
-options}), the output will then be a (possibly multi-extension) FITS file
-or (possibly two) plain text file(s). When the output is a FITS file, the
-default re-arranged inputs will be two extensions of the output FITS
-file. With @option{--outcols} and @option{--logasoutput}, the FITS output
-will be a single table (in one extension).
-
-When the @option{--log} option is called (see @ref{Operating mode
-options}), and there was a match, Match will also create a file named
-@file{astmatch.fits} (or @file{astmatch.txt}, depending on
-@option{--tableformat}, see @ref{Input output options}) in the directory it
-is run in. This log table will have three columns. The first and second
-columns show the matching row/record number (counting from 1) of the first
-and second input catalogs respectively. The third column is the distance
-between the two matched positions. The units of the distance are the same
-as the given coordinates (given the possible ellipticity, see description
-of @option{--aperture} below). When @option{--logasoutput} is called, no
-log file (with a fixed name) will be created. In this case, the output file
-(possibly given by the @option{--output} option) will have the contents of
-this log file.
+Match will find the rows that are nearest to each other in two catalogs (given 
some coordinate columns).
+Therefore two catalogs are necessary for input.
+However, they don't necessarily have to be files:
+1) the first catalog can also come from the standard input (for example a 
pipe, see @ref{Standard input});
+2) when only one point is needed, you can use the @option{--coord} option to 
avoid creating a file for the second catalog.
+When the inputs are files, they can be plain text tables or FITS tables; for 
more, see @ref{Tables}.
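+
+For example, a hypothetical single-point match (the coordinates and aperture, 
in degrees, are only illustrative) could be:
+
+@example
+$ astmatch input1.fits --ccol1=RA,DEC --coord=53.16,-27.78 \
+           --aperture=1/3600
+@end example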
+
+Match follows the same basic behavior of all Gnuastro programs as fully 
described in @ref{Common program behavior}.
+If the first input is a FITS file, the common @option{--hdu} option (see 
@ref{Input output options}) should be used to identify the extension.
+When the second input is FITS, the extension must be specified with 
@option{--hdu2}.
+
+When @option{--quiet} is not called, Match will print the number of matches 
found in standard output (on the command-line).
+When matches are found, by default, the output file(s) will be the re-arranged 
input tables such that the rows match each other: both output tables will have 
the same number of rows which are matched with each other.
+If @option{--outcols} is called, the output is a single table with rows chosen 
from either of the two inputs in any order.
+If the @option{--logasoutput} option is called, the output will be a single 
table with the contents of the log file, see below.
+If no matches are found, the columns of the output table(s) will have zero 
rows (with proper meta-data).
+
+If no output file name is given with the @option{--output} option, then 
automatic output (see @ref{Automatic output}) will be used to determine the 
output name(s).
+Depending on @option{--tableformat} (see @ref{Input output options}), the 
output will then be a (possibly multi-extension) FITS file or (possibly two) 
plain text file(s).
+When the output is a FITS file, the default re-arranged inputs will be two 
extensions of the output FITS file.
+With @option{--outcols} and @option{--logasoutput}, the FITS output will be a 
single table (in one extension).
+
+When the @option{--log} option is called (see @ref{Operating mode options}), 
and there was a match, Match will also create a file named @file{astmatch.fits} 
(or @file{astmatch.txt}, depending on @option{--tableformat}, see @ref{Input 
output options}) in the directory it is run in.
+This log table will have three columns.
+The first and second columns show the matching row/record number (counting 
from 1) of the first and second input catalogs respectively.
+The third column is the distance between the two matched positions.
+The units of the distance are the same as the given coordinates (given the 
possible ellipticity, see description of @option{--aperture} below).
+When @option{--logasoutput} is called, no log file (with a fixed name) will be 
created.
+In this case, the output file (possibly given by the @option{--output} option) 
will have the contents of this log file.
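+
+For example, a sketch of requesting this log file besides the default outputs (the input names and columns are hypothetical):
+
+@example
+## `--log' writes astmatch.fits (or astmatch.txt) separately.
+$ astmatch cat1.fits cat2.fits --ccol1=RA,DEC --ccol2=RA,DEC \
+           --aperture=1/3600 --log
+@end example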
 
 @cartouche
 @noindent
-@strong{@option{--log} isn't thread-safe}: As described above, when
-@option{--logasoutput} is not called, the Log file has a fixed name for all
-calls to Match. Therefore if a separate log is requested in two
-simultaneous calls to Match in the same directory, Match will try to write
-to the same file. This will cause problems like unreasonable log file,
-undefined behavior, or a crash.
+@strong{@option{--log} isn't thread-safe}: As described above, when 
@option{--logasoutput} is not called, the Log file has a fixed name for all 
calls to Match.
+Therefore if a separate log is requested in two simultaneous calls to Match in 
the same directory, Match will try to write to the same file.
+This will cause problems like an unreasonable log file, undefined behavior, or a crash.
 @end cartouche
 
 @table @option
 @item -H STR
 @itemx --hdu2=STR
-The extension/HDU of the second input if it is a FITS file. When it isn't a
-FITS file, this option's value is ignored. For the first input, the common
-option @option{--hdu} must be used.
+The extension/HDU of the second input if it is a FITS file.
+When it isn't a FITS file, this option's value is ignored.
+For the first input, the common option @option{--hdu} must be used.
 
 @item --outcols=STR
-Columns (from both inputs) to write into a single matched table output. The
-value to @code{--outcols} must be a comma-separated list of strings. The
-first character of each string specifies the input catalog: @option{a} for
-the first and @option{b} for the second. The rest of the characters of the
-string will be directly used to identify the proper column(s) in the
-respective table. See @ref{Selecting table columns} for how columns can be
-specified in Gnuastro.
+Columns (from both inputs) to write into a single matched table output.
+The value to @code{--outcols} must be a comma-separated list of strings.
+The first character of each string specifies the input catalog: @option{a} for 
the first and @option{b} for the second.
+The rest of the characters of the string will be directly used to identify the 
proper column(s) in the respective table.
+See @ref{Selecting table columns} for how columns can be specified in Gnuastro.
 
-For example the output of @option{--outcols=a1,bRA,bDEC} will have three
-columns: the first column of the first input, along with the @option{RA}
-and @option{DEC} columns of the second input.
+For example the output of @option{--outcols=a1,bRA,bDEC} will have three 
columns: the first column of the first input, along with the @option{RA} and 
@option{DEC} columns of the second input.
 
-If the string after @option{a} or @option{b} is @option{_all}, then all the
-columns of the respective input file will be written in the output. For
-example the command below will print all the input columns from the first
-catalog along with the 5th column from the second:
+If the string after @option{a} or @option{b} is @option{_all}, then all the 
columns of the respective input file will be written in the output.
+For example the command below will print all the input columns from the first 
catalog along with the 5th column from the second:
 
 @example
 $ astmatch a.fits b.fits --outcols=a_all,b5
 @end example
 
-@code{_all} can be used multiple times, possibly on both inputs. Tip: if an
-input's column is called @code{_all} (an unlikely name!) and you don't want
-all the columns from that table the output, use its column number to avoid
-confusion.
+@code{_all} can be used multiple times, possibly on both inputs.
+Tip: if an input's column is called @code{_all} (an unlikely name!) and you don't want all the columns from that table in the output, use its column number to avoid confusion.
 
-Another example is given in the one-line examples above. Compared to the
-default case (where two tables with all their columns) are saved
-separately, using this option is much faster: it will only read and
-re-arrange the necessary columns and it will write a single output
-table. Combined with regular expressions in large tables, this can be a
-very powerful and convenient way to merge various tables into one.
+Another example is given in the one-line examples above.
+Compared to the default case (where two tables with all their columns are saved separately), using this option is much faster: it will only read and re-arrange the necessary columns and it will write a single output table.
+Combined with regular expressions in large tables, this can be a very powerful 
and convenient way to merge various tables into one.
 
-When @option{--coord} is given, no second catalog will be read. The second
-catalog will be created internally based on the values given to
-@option{--coord}. So column names aren't defined and you can only request
-integer column numbers that are less than the number of coordinates given
-to @option{--coord}. For example if you want to find the row matching RA of
-1.2345 and Dec of 6.7890, then you should use
-@option{--coord=1.2345,6.7890}. But when using @option{--outcols}, you
-can't give @code{bRA}, or @code{b25}.
+When @option{--coord} is given, no second catalog will be read.
+The second catalog will be created internally based on the values given to 
@option{--coord}.
+So column names aren't defined and you can only request integer column numbers 
that are less than the number of coordinates given to @option{--coord}.
+For example if you want to find the row matching RA of 1.2345 and Dec of 
6.7890, then you should use @option{--coord=1.2345,6.7890}.
+But when using @option{--outcols}, you can't give @code{bRA}, or @code{b25}.
 
 @item -l
 @itemx --logasoutput
-The output file will have the contents of the log file: indexes in the two
-catalogs that match with each other along with their distance. See
-description above. When this option is called, a log file called
-@file{astmatch.txt} will not be created. With this option, the default
-output behavior (two tables containing the re-arranged inputs) will be
+The output file will have the contents of the log file: indexes in the two 
catalogs that match with each other along with their distance.
+See description above.
+When this option is called, a log file called @file{astmatch.txt} will not be 
created.
+With this option, the default output behavior (two tables containing the re-arranged inputs) will be disabled.
 
 @item --notmatched
-Write the non-matching rows into the outputs, not the matched ones. Note
-that with this option, the two output tables will not necessarily have the
-same number of rows. Therefore, this option cannot be called with
-@option{--outcols}. @option{--outcols} prints mixed columns from both
-inputs, so they must all have the same number of elements and must
-correspond to each other.
+Write the non-matching rows into the outputs, not the matched ones.
+Note that with this option, the two output tables will not necessarily have 
the same number of rows.
+Therefore, this option cannot be called with @option{--outcols}.
+@option{--outcols} prints mixed columns from both inputs, so they must all 
have the same number of elements and must correspond to each other.
 
 @item -c INT/STR[,INT/STR]
 @itemx --ccol1=INT/STR[,INT/STR]
-The coordinate columns of the first input. The number of dimensions for the
-match is determined by the number of comma-separated values given to this
-option. The values can be the column number (counting from 1), exact column
-name or a regular expression. For more, see @ref{Selecting table
-columns}. See the one-line examples above for some usages of this option.
+The coordinate columns of the first input.
+The number of dimensions for the match is determined by the number of 
comma-separated values given to this option.
+The values can be the column number (counting from 1), exact column name or a 
regular expression.
+For more, see @ref{Selecting table columns}.
+See the one-line examples above for some usages of this option.
 
 @item -C INT/STR[,INT/STR]
 @itemx --ccol2=INT/STR[,INT/STR]
-The coordinate columns of the second input. See the example in
-@option{--ccol1} for more.
+The coordinate columns of the second input.
+See the example in @option{--ccol1} for more.
 
 @item -d FLT[,FLT]
 @itemx --coord=FLT[,FLT]
-Manually specify the coordinates to match against the given catalog. With
-this option, Match will not look for a second input file/table and will
-directly use the coordinates given to this option.
-
-When this option is called, the output changes in the following ways: 1)
-when @option{--outcols} is specified, for the second input, it can only
-accept integer numbers that are less than the number of values given to
-this option, see description of that option for more. 2) By default (when
-@option{--outcols} isn't used), only the matching row of the first table
-will be output (a single file), not two separate files (one for each
-table).
-
-This option is good when you have a (large) catalog and only want to match
-a single coordinate to it (for example to find the nearest catalog entry to
-your desired point). With this option, you can write the coordinates on the
-command-line and thus avoid the need to make a single-row file.
+Manually specify the coordinates to match against the given catalog.
+With this option, Match will not look for a second input file/table and will 
directly use the coordinates given to this option.
+
+When this option is called, the output changes in the following ways:
+1) when @option{--outcols} is specified, for the second input, it can only 
accept integer numbers that are less than the number of values given to this 
option, see description of that option for more.
+2) By default (when @option{--outcols} isn't used), only the matching row of 
the first table will be output (a single file), not two separate files (one for 
each table).
+
+This option is good when you have a (large) catalog and only want to match a 
single coordinate to it (for example to find the nearest catalog entry to your 
desired point).
+With this option, you can write the coordinates on the command-line and thus 
avoid the need to make a single-row file.
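+
+For example, a minimal sketch of matching one coordinate (the one from the description above) against a hypothetical catalog:
+
+@example
+## Only the matching row of the first table is output.
+$ astmatch catalog.fits --ccol1=RA,DEC --coord=1.2345,6.7890 \
+           --aperture=2/3600
+@end example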
 
 @item -a FLT[,FLT[,FLT]]
 @itemx --aperture=FLT[,FLT[,FLT]]
-Parameters of the aperture for matching. The values given to this option
-can be fractions, for example when the position columns are in units of
-degrees, @option{1/3600} can be used to ask for one arcsecond. The
-interpretation of the values depends on the requested dimensions
-(determined from @option{--ccol1} and @code{--ccol2}) and how many values
-are given to this option.
+Parameters of the aperture for matching.
+The values given to this option can be fractions, for example when the 
position columns are in units of degrees, @option{1/3600} can be used to ask 
for one arcsecond.
+The interpretation of the values depends on the requested dimensions 
(determined from @option{--ccol1} and @code{--ccol2}) and how many values are 
given to this option.
 
 When multiple objects are found within the aperture, the match is defined
 as the nearest one. In a multi-dimensional dataset, when the aperture is a
@@ -21008,62 +15346,58 @@ of this distance, see @mymath{r_{el}} in 
@ref{Defining an ellipse and
 ellipsoid}.
 
 @table @asis
-@item 1D
-The aperture/interval can only take one value: half of the interval around
-each point (maximum distance from each point).
+@item 1D match
+The aperture/interval can only take one value: half of the interval around 
each point (maximum distance from each point).
 
-@item 2D
-The aperture can be a circle, an ellipse aligned in the axes or an ellipse
-with a rotated major axis. To simplify the usage, the shape can be
-determined based on the number of values given to this option.
+@item 2D match
+In a 2D match, the aperture can be a circle, an ellipse aligned in the axes or 
an ellipse with a rotated major axis.
+To simplify the usage, you can determine the shape based on the number of free parameters for each.
 
 @table @asis
 @item 1 number
-For example @option{--aperture=2}. The aperture will be a circle of the
-given radius. The value will be in the same units as the columns in
-@option{--ccol1} and @option{--ccol2}).
+For example @option{--aperture=2}.
+The aperture will be a circle of the given radius.
+The value will be in the same units as the columns in @option{--ccol1} and @option{--ccol2}.
 
 @item 2 numbers
-For example @option{--aperture=3,4e-10}. The aperture will be a general
-ellipse with the respective value along each dimension. The numbers are in
-units of the first and second axis. In the example above, the semi-axis
-value along the first axis will be 3 (in units of the first coordinate) and
-along the second axis will be @mymath{4\times10^{-10}} (in units of the
-second coordinate). Such values can happen if you are comparing catalogs of
-a spectra for example.
+For example @option{--aperture=3,4e-10}.
+The aperture will be an ellipse (if the two numbers are different) with the 
respective value along each dimension.
+The numbers are in units of the first and second axis.
+In the example above, the semi-axis value along the first axis will be 3 (in 
units of the first coordinate) and along the second axis will be 
@mymath{4\times10^{-10}} (in units of the second coordinate).
+Such values can happen if you are comparing catalogs of spectra, for example.
+If more than one object exists in the aperture, the nearest will be found 
along the major axis as described in @ref{Defining an ellipse and ellipsoid}.
 
 @item 3 numbers
-For example @option{--aperture=2,0.6,30}. The aperture will be an ellipse
-(if the second value is not 1). The first number is the semi-major axis,
-the second is the axis ratio and the third is the position angle (in
-degrees).
+For example @option{--aperture=2,0.6,30}.
+The aperture will be an ellipse (if the second value is not 1).
+The first number is the semi-major axis, the second is the axis ratio and the 
third is the position angle (in degrees).
+If multiple matches are found within the ellipse, the distance (to find the 
nearest) is calculated along the major axis in the elliptical space, see 
@ref{Defining an ellipse and ellipsoid}.
 @end table
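+
+For example, a sketch of a 2D match over a rotated elliptical aperture (the input files and coordinate columns are hypothetical):
+
+@example
+## Semi-major axis 2, axis ratio 0.6, position angle 30 degrees.
+$ astmatch a.fits b.fits --ccol1=2,3 --ccol2=2,3 --aperture=2,0.6,30
+@end example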
 
-@item 3D
-The aperture (matching volume) can be a sphere, an ellipsoid aligned on the
-three axises or a genenral ellipsoid rotated in any direction. To simplify
-the usage, the shape can be determined based on the number of values given
-to this option.
+@item 3D match
+The aperture (matching volume) can be a sphere, an ellipsoid aligned on the three axes or a general ellipsoid rotated in any direction.
+To simplify the usage, the shape can be determined based on the number of values given to this option.
 
 @table @asis
 @item 1 number
-For example @option{--aperture=3}. The matching volume will be a sphere of
-the given radius. The value is in the same units as the input coordinates.
+For example @option{--aperture=3}.
+The matching volume will be a sphere of the given radius.
+The value is in the same units as the input coordinates.
 
 @item 3 numbers
-For example @option{--aperture=4,5,6e-10}. The aperture will be a general
-ellipsoid with the respective extent along each dimension. The numbers must
-be in the same units as each axis. This is very similar to the two number
-case of 2D inputs. See there for more.
+For example @option{--aperture=4,5,6e-10}.
+The aperture will be a general ellipsoid with the respective extent along each 
dimension.
+The numbers must be in the same units as each axis.
+This is very similar to the two number case of 2D inputs.
+See there for more.
 
 @item 6 numbers
-For example @option{--aperture=4,0.5,0.6,10,20,30}. The numbers represent
-the full general ellipsoid definition (in any orientation). For the
-definition of a general ellipsoid, see @ref{Defining an ellipse and
-ellipsoid}. The first number is the semi-major axis. The second and third
-are the two axis ratios. The last three are the three Euler angles in units
-of degrees in the ZXZ order as fully described in @ref{Defining an ellipse
-and ellipsoid}.
+For example @option{--aperture=4,0.5,0.6,10,20,30}.
+The numbers represent the full general ellipsoid definition (in any 
orientation).
+For the definition of a general ellipsoid, see @ref{Defining an ellipse and 
ellipsoid}.
+The first number is the semi-major axis.
+The second and third are the two axis ratios.
+The last three are the three Euler angles in units of degrees in the ZXZ order 
as fully described in @ref{Defining an ellipse and ellipsoid}.
 @end table
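+
+For example, a sketch of a 3D match within a simple spherical volume (the input files and coordinate columns are hypothetical):
+
+@example
+$ astmatch a.fits b.fits --ccol1=1,2,3 --ccol2=1,2,3 --aperture=3
+@end example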
 
 @end table
@@ -21086,11 +15420,8 @@ and ellipsoid}.
 
 @cindex Fitting
 @cindex Modeling
-In order to fully understand observations after initial analysis on
-the image, it is very important to compare them with the existing
-models to be able to further understand both the models and the
-data. The tools in this chapter create model galaxies and will provide
-2D fittings to be able to understand the detections.
+In order to fully understand observations after initial analysis on the image, 
it is very important to compare them with the existing models to be able to 
further understand both the models and the data.
+The tools in this chapter create model galaxies and will provide 2D fittings 
to be able to understand the detections.
 
 @menu
 * MakeProfiles::                Making mock galaxies and stars.
@@ -21105,59 +15436,40 @@ data. The tools in this chapter create model galaxies 
and will provide
 
 @cindex Checking detection algorithms
 @pindex @r{MakeProfiles (}astmkprof@r{)}
-MakeProfiles will create mock astronomical profiles from a catalog,
-either individually or together in one output image. In data analysis,
-making a mock image can act like a calibration tool, through which you
-can test how successfully your detection technique is able to detect a
-known set of objects. There are commonly two aspects to detecting: the
-detection of the fainter parts of bright objects (which in the case of
-galaxies fade into the noise very slowly) or the complete detection of
-an over-all faint object. Making mock galaxies is the most accurate
-(and idealistic) way these two aspects of a detection algorithm can be
-tested. You also need mock profiles in fitting known functional
-profiles with observations.
-
-MakeProfiles was initially built for extra galactic studies, so
-currently the only astronomical objects it can produce are stars and
-galaxies. We welcome the simulation of any other astronomical
-object. The general outline of the steps that MakeProfiles takes are
-the following:
+MakeProfiles will create mock astronomical profiles from a catalog, either 
individually or together in one output image.
+In data analysis, making a mock image can act like a calibration tool, through 
which you can test how successfully your detection technique is able to detect 
a known set of objects.
+There are commonly two aspects to detecting: the detection of the fainter 
parts of bright objects (which in the case of galaxies fade into the noise very 
slowly) or the complete detection of an over-all faint object.
+Making mock galaxies is the most accurate (and idealistic) way these two 
aspects of a detection algorithm can be tested.
+You also need mock profiles in fitting known functional profiles with 
observations.
+
+MakeProfiles was initially built for extragalactic studies, so currently the only astronomical objects it can produce are stars and galaxies.
+We welcome the simulation of any other astronomical object.
+The general outline of the steps that MakeProfiles takes is the following (see the example sketch after this list):
 
 @enumerate
 
 @item
-Build the full profile out to its truncation radius in a possibly
-over-sampled array.
+Build the full profile out to its truncation radius in a possibly over-sampled 
array.
 
 @item
-Multiply all the elements by a fixed constant so its total magnitude
-equals the desired total magnitude.
+Multiply all the elements by a fixed constant so its total magnitude equals 
the desired total magnitude.
 
 @item
-If @option{--individual} is called, save the array for each profile to
-a FITS file.
+If @option{--individual} is called, save the array for each profile to a FITS 
file.
 
 @item
-If @option{--nomerged} is not called, add the overlapping pixels of
-all the created profiles to the output image and abort.
+If @option{--nomerged} is not called, add the overlapping pixels of all the 
created profiles to the output image and abort.
 
 @end enumerate
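+
+For example, a minimal sketch of these steps on a hypothetical catalog (the catalog name and output size are only illustrative), keeping the image of each profile with @option{--individual}:
+
+@example
+$ astmkprof catalog.txt --naxis=500,500 --individual
+@end example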
 
-Using input values, MakeProfiles adds the World Coordinate System
-(WCS) headers of the FITS standard to all its outputs (except PSF
-images!). For a simple test on a set of mock galaxies in one image,
-there is no need for the third step or the WCS information.
+Using input values, MakeProfiles adds the World Coordinate System (WCS) 
headers of the FITS standard to all its outputs (except PSF images!).
+For a simple test on a set of mock galaxies in one image, there is no need for 
the third step or the WCS information.
 
 @cindex Transform image
 @cindex Lensing simulations
 @cindex Image transformations
-However in complicated simulations like weak lensing simulations,
-where each galaxy undergoes various types of individual
-transformations based on their position, those transformations can be
-applied to the different individual images with other programs. After
-all the transformations are applied, using the WCS information in each
-individual profile image, they can be merged into one output image for
-convolution and adding noise.
+However in complicated simulations like weak lensing simulations, where each 
galaxy undergoes various types of individual transformations based on their 
position, those transformations can be applied to the different individual 
images with other programs.
+After all the transformations are applied, using the WCS information in each 
individual profile image, they can be merged into one output image for 
convolution and adding noise.
 
 @menu
 * Modeling basics::             Astronomical modeling basics.
@@ -21172,10 +15484,8 @@ convolution and adding noise.
 @node Modeling basics, If convolving afterwards, MakeProfiles, MakeProfiles
 @subsection Modeling basics
 
-In the subsections below, first a review of some very basic
-information and concepts behind modeling a real astronomical image is
-given. You can skip this subsection if you are already sufficiently
-familiar with these concepts.
+In the subsections below, first a review of some very basic information and 
concepts behind modeling a real astronomical image is given.
+You can skip this subsection if you are already sufficiently familiar with 
these concepts.
 
 @menu
 * Defining an ellipse and ellipsoid::  Definition of these important shapes.
@@ -21192,64 +15502,41 @@ familiar with these concepts.
 @cindex Ellipse
 @cindex Axis ratio
 @cindex Position angle
-The PSF, see @ref{PSF}, and galaxy radial profiles are generally defined on
-an ellipse. Therefore, in this section we'll start defining an ellipse on a
-pixelated 2D surface. Labeling the major axis of an ellipse @mymath{a}, and
-its minor axis with @mymath{b}, the @emph{axis ratio} is defined as:
-@mymath{q\equiv b/a}. The major axis of an ellipse can be aligned in any
-direction, therefore the angle of the major axis with respect to the
-horizontal axis of the image is defined to be the @emph{position angle} of
-the ellipse and in this book, we show it with @mymath{\theta}.
+The PSF, see @ref{PSF}, and galaxy radial profiles are generally defined on an 
ellipse.
+Therefore, in this section we'll start defining an ellipse on a pixelated 2D 
surface.
+Labeling the major axis of an ellipse @mymath{a}, and its minor axis with 
@mymath{b}, the @emph{axis ratio} is defined as: @mymath{q\equiv b/a}.
+The major axis of an ellipse can be aligned in any direction, therefore the 
angle of the major axis with respect to the horizontal axis of the image is 
defined to be the @emph{position angle} of the ellipse and in this book, we 
show it with @mymath{\theta}.
 
 @cindex Radial profile on ellipse
-Our aim is to put a radial profile of any functional form @mymath{f(r)}
-over an ellipse. Hence we need to associate a radius/distance to every
-point in space. Let's define the radial distance @mymath{r_{el}} as the
-distance on the major axis to the center of an ellipse which is located at
-@mymath{i_c} and @mymath{j_c} (in other words @mymath{r_{el}\equiv{a}}). We
-want to find @mymath{r_{el}} of a point located at @mymath{(i,j)} (in the
-image coordinate system) from the center of the ellipse with axis ratio
-@mymath{q} and position angle @mymath{\theta}. First the coordinate system
-is rotated@footnote{Do not confuse the signs of @mymath{sin} with the
-rotation matrix defined in @ref{Warping basics}. In that equation, the
-point is rotated, here the coordinates are rotated and the point is fixed.}
-by @mymath{\theta} to get the new rotated coordinates of that point
-@mymath{(i_r,j_r)}:
+Our aim is to put a radial profile of any functional form @mymath{f(r)} over 
an ellipse.
+Hence we need to associate a radius/distance to every point in space.
+Let's define the radial distance @mymath{r_{el}} as the distance on the major 
axis to the center of an ellipse which is located at @mymath{i_c} and 
@mymath{j_c} (in other words @mymath{r_{el}\equiv{a}}).
+We want to find @mymath{r_{el}} of a point located at @mymath{(i,j)} (in the 
image coordinate system) from the center of the ellipse with axis ratio 
@mymath{q} and position angle @mymath{\theta}.
+First the coordinate system is rotated@footnote{Do not confuse the signs of 
@mymath{sin} with the rotation matrix defined in @ref{Warping basics}.
+In that equation, the point is rotated, here the coordinates are rotated and 
the point is fixed.} by @mymath{\theta} to get the new rotated coordinates of 
that point @mymath{(i_r,j_r)}:
 
 @dispmath{i_r(i,j)=+(i_c-i)\cos\theta+(j_c-j)\sin\theta}
 @dispmath{j_r(i,j)=-(i_c-i)\sin\theta+(j_c-j)\cos\theta}
 
 @cindex Elliptical distance
-@noindent Recall that an ellipse is defined by @mymath{(i_r/a)^2+(j_r/b)^2=1}
-and that we defined @mymath{r_{el}\equiv{a}}. Hence, multiplying all
-elements of the the ellipse definition with @mymath{r_{el}^2} we get the
-elliptical distance at this point point located:
-@mymath{r_{el}=\sqrt{i_r^2+(j_r/q)^2}}. To place the radial profiles
-explained below over an ellipse, @mymath{f(r_{el})} is calculated based on
-the functional radial profile desired.
+@noindent Recall that an ellipse is defined by @mymath{(i_r/a)^2+(j_r/b)^2=1} 
and that we defined @mymath{r_{el}\equiv{a}}.
+Hence, multiplying all elements of the ellipse definition with @mymath{r_{el}^2}, we get the elliptical distance of this point: @mymath{r_{el}=\sqrt{i_r^2+(j_r/q)^2}}.
+To place the radial profiles explained below over an ellipse, 
@mymath{f(r_{el})} is calculated based on the functional radial profile desired.
 
 @cindex Ellipsoid
 @cindex Euler angles
-An ellipse in 3D, or an @url{https://en.wikipedia.org/wiki/Ellipsoid,
-ellipsoid}, can be defined following similar principles as before. Labeling
-the major (largest) axis length as @mymath{a}, the second and third (in a
-right-handed coordinate system) axis lengths can be labeled as @mymath{b}
-and @mymath{c}. Hence we have two axis ratios: @mymath{q_1\equiv{b/a}} and
-@mymath{q_2\equiv{c/a}}. The orientation of the ellipsoid can be defined
-from the orientation of its major axis. There are many ways to define 3D
-orientation and order matters. So to be clear, here we use the ZXZ (or
-@mymath{Z_1X_2Z_3}) proper @url{https://en.wikipedia.org/wiki/Euler_angles,
-Euler angles} to define the 3D orientation. In short, when a point is
-rotated in this order, we first rotate it around the Z axis (third axis) by
-@mymath{\alpha}, then about the (rotated) X axis by @mymath{\beta} and
-finally about the (rotated) Z axis by @mymath{\gamma}.
-
-Following the discussion in @ref{Merging multiple warpings}, we can define
-the full rotation with the following matrix multiplication. However, here
-we are rotating the coordinates, not the point. Therefore, both the
-rotation angles and rotation order are reversed. We are also not using
-homogeneous coordinates (see @ref{Warping basics}) since we aren't
-concerned with translation in this context:
+An ellipse in 3D, or an @url{https://en.wikipedia.org/wiki/Ellipsoid, 
ellipsoid}, can be defined following similar principles as before.
+Labeling the major (largest) axis length as @mymath{a}, the second and third 
(in a right-handed coordinate system) axis lengths can be labeled as @mymath{b} 
and @mymath{c}.
+Hence we have two axis ratios: @mymath{q_1\equiv{b/a}} and 
@mymath{q_2\equiv{c/a}}.
+The orientation of the ellipsoid can be defined from the orientation of its 
major axis.
+There are many ways to define 3D orientation and order matters.
+So to be clear, here we use the ZXZ (or @mymath{Z_1X_2Z_3}) proper 
@url{https://en.wikipedia.org/wiki/Euler_angles, Euler angles} to define the 3D 
orientation.
+In short, when a point is rotated in this order, we first rotate it around the 
Z axis (third axis) by @mymath{\alpha}, then about the (rotated) X axis by 
@mymath{\beta} and finally about the (rotated) Z axis by @mymath{\gamma}.
+
+Following the discussion in @ref{Merging multiple warpings}, we can define the 
full rotation with the following matrix multiplication.
+However, here we are rotating the coordinates, not the point.
+Therefore, both the rotation angles and rotation order are reversed.
+We are also not using homogeneous coordinates (see @ref{Warping basics}) since 
we aren't concerned with translation in this context:
 
 @dispmath{\left[\matrix{i_r\cr j_r\cr k_r}\right] =
           \left[\matrix{cos\gamma&sin\gamma&0\cr -sin\gamma&cos\gamma&0\cr     
            0&0&1}\right]
@@ -21267,24 +15554,16 @@ Recall that an ellipsoid can be characterized with
 @cindex Inside-out construction
 @cindex Making profiles pixel by pixel
 @cindex Pixel by pixel making of profiles
-MakeProfiles builds the profile starting from the nearest element (pixel in
-an image) in the dataset to the profile center. The profile value is
-calculated for that central pixel using monte carlo integration, see
-@ref{Sampling from a function}. The next pixel is the next nearest neighbor
-to the central pixel as defined by @mymath{r_{el}}. This process goes on
-until the profile is fully built upto the truncation radius. This is done
-fairly efficiently using a breadth first parsing
-strategy@footnote{@url{http://en.wikipedia.org/wiki/Breadth-first_search}}
-which is implemented through an ordered linked list.
-
-Using this approach, we build the profile by expanding the
-circumference. Not one more extra pixel has to be checked (the calculation
-of @mymath{r_{el}} from above is not cheap in CPU terms). Another
-consequence of this strategy is that extending MakeProfiles to three
-dimensions becomes very simple: only the neighbors of each pixel have to be
-changed. Everything else after that (when the pixel index and its radial
-profile have entered the linked list) is the same, no matter the number of
-dimensions we are dealing with.
+MakeProfiles builds the profile starting from the nearest element (pixel in an 
image) in the dataset to the profile center.
+The profile value is calculated for that central pixel using Monte Carlo integration, see @ref{Sampling from a function}.
+The next pixel is the next nearest neighbor to the central pixel as defined by 
@mymath{r_{el}}.
+This process goes on until the profile is fully built up to the truncation radius.
+This is done fairly efficiently using a breadth first parsing 
strategy@footnote{@url{http://en.wikipedia.org/wiki/Breadth-first_search}} 
which is implemented through an ordered linked list.
+
+Using this approach, we build the profile by expanding the circumference.
+Not one more extra pixel has to be checked (the calculation of @mymath{r_{el}} 
from above is not cheap in CPU terms).
+Another consequence of this strategy is that extending MakeProfiles to three 
dimensions becomes very simple: only the neighbors of each pixel have to be 
changed.
+Everything else after that (when the pixel index and its radial profile have 
entered the linked list) is the same, no matter the number of dimensions we are 
dealing with.
 
 
 
@@ -21296,29 +15575,24 @@ dimensions we are dealing with.
 @cindex Diffraction limited
 @cindex Point spread function
 @cindex Spread of a point source
-Assume we have a `point' source, or a source that is far smaller
-than the maximum resolution (a pixel). When we take an image of it, it
-will `spread' over an area. To quantify that spread, we can define a
-`function'. This is how the point spread function or the PSF of an
-image is defined. This `spread' can have various causes, for example
-in ground based astronomy, due to the atmosphere. In practice we can
-never surpass the `spread' due to the diffraction of the lens
-aperture. Various other effects can also be quantified through a PSF.
-For example, the simple fact that we are sampling in a discrete space,
-namely the pixels, also produces a very small `spread' in the image.
+Assume we have a `point' source, or a source that is far smaller than the 
maximum resolution (a pixel).
+When we take an image of it, it will `spread' over an area.
+To quantify that spread, we can define a `function'.
+This is how the point spread function or the PSF of an image is defined.
+This `spread' can have various causes; for example, in ground-based astronomy, it is due to the atmosphere.
+In practice we can never surpass the `spread' due to the diffraction of the 
lens aperture.
+Various other effects can also be quantified through a PSF.
+For example, the simple fact that we are sampling in a discrete space, namely 
the pixels, also produces a very small `spread' in the image.
 
 @cindex Blur image
 @cindex Convolution
 @cindex Image blurring
 @cindex PSF image size
-Convolution is the mathematical process by which we can apply a `spread' to
-an image, or in other words blur the image, see @ref{Convolution
-process}. The Brightness of an object should remain unchanged after
-convolution, see @ref{Flux Brightness and magnitude}. Therefore, it is
-important that the sum of all the pixels of the PSF be unity. The PSF image
-also has to have an odd number of pixels on its sides so one pixel can be
-defined as the center. In MakeProfiles, the PSF can be set by the two
-methods explained below.
+Convolution is the mathematical process by which we can apply a `spread' to an 
image, or in other words blur the image, see @ref{Convolution process}.
+The Brightness of an object should remain unchanged after convolution, see 
@ref{Flux Brightness and magnitude}.
+Therefore, it is important that the sum of all the pixels of the PSF be unity.
+The PSF image also has to have an odd number of pixels on its sides so one 
pixel can be defined as the center.
+In MakeProfiles, the PSF can be set by the two methods explained below.
 
 @table @asis
 
@@ -21327,47 +15601,37 @@ methods explained below.
 @cindex PSF width
 @cindex Parametric PSFs
 @cindex Full Width at Half Maximum
-A known mathematical function is used to make the PSF. In this case,
-only the parameters to define the functions are necessary and
-MakeProfiles will make a PSF based on the given parameters for each
-function. In both cases, the center of the profile has to be exactly
-in the middle of the central pixel of the PSF (which is automatically
-done by MakeProfiles). When talking about the PSF, usually, the full
-width at half maximum or FWHM is used as a scale of the width of the
-PSF.
+A known mathematical function is used to make the PSF.
+In this case, only the parameters to define the functions are necessary and 
MakeProfiles will make a PSF based on the given parameters for each function.
+In both cases, the center of the profile has to be exactly in the middle of 
the central pixel of the PSF (which is automatically done by MakeProfiles).
+When talking about the PSF, usually, the full width at half maximum or FWHM is 
used as a scale of the width of the PSF.
 
 @table @cite
 @item Gaussian
 @cindex Gaussian distribution
-In the older papers, and to a lesser extent even today, some
-researchers use the 2D Gaussian function to approximate the PSF of
-ground based images. In its most general form, a Gaussian function can
-be written as:
+In the older papers, and to a lesser extent even today, some researchers use 
the 2D Gaussian function to approximate the PSF of ground based images.
+In its most general form, a Gaussian function can be written as:
 
 @dispmath{f(r)=a \exp \left( -(x-\mu)^2 \over 2\sigma^2 \right)+d}
 
-Since the center of the profile is pre-defined, @mymath{\mu} and
-@mymath{d} are constrained. @mymath{a} can also be found because the
-function has to be normalized. So the only important parameter for
-MakeProfiles is the @mymath{\sigma}. In the Gaussian function we have
-this relation between the FWHM and @mymath{\sigma}:
+Since the center of the profile is pre-defined, @mymath{\mu} and @mymath{d} 
are constrained.
+@mymath{a} can also be found because the function has to be normalized.
+So the only important parameter for MakeProfiles is the @mymath{\sigma}.
+In the Gaussian function we have this relation between the FWHM and 
@mymath{\sigma}:
 
 @cindex Gaussian FWHM
 @dispmath{\rm{FWHM}_g=2\sqrt{2\ln{2}}\sigma \approx 2.35482\sigma}
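+
+For example, for a hypothetical FWHM of 5 pixels, this relation gives @mymath{\sigma=5/2.35482\approx2.12} pixels.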
 
 @item Moffat
 @cindex Moffat function
-The Gaussian profile is much sharper than the images taken from stars
-on photographic plates or CCDs. Therefore in 1969, Moffat proposed
-this functional form for the image of stars:
+The Gaussian profile is much sharper than the images taken from stars on 
photographic plates or CCDs.
+Therefore in 1969, Moffat proposed this functional form for the image of stars:
 
 @dispmath{f(r)=a \left[ 1+\left( r\over \alpha \right)^2 \right]^{-\beta}}
 
 @cindex Moffat beta
-Again, @mymath{a} is constrained by the normalization, therefore two
-parameters define the shape of the Moffat function: @mymath{\alpha} and
-@mymath{\beta}. The radial parameter is @mymath{\alpha} which is related
-to the FWHM by
+Again, @mymath{a} is constrained by the normalization, therefore two 
parameters define the shape of the Moffat function: @mymath{\alpha} and 
@mymath{\beta}.
+The radial parameter is @mymath{\alpha} which is related to the FWHM by
 
 @cindex Moffat FWHM
 @dispmath{\rm{FWHM}_m=2\alpha\sqrt{2^{1/\beta}-1}}
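+
+For example, inverting this relation for a hypothetical FWHM of 5 pixels, with the @mymath{\beta=4.765} value discussed below, gives @mymath{\alpha=5/(2\sqrt{2^{1/4.765}-1})\approx6.3} pixels.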
@@ -21375,36 +15639,29 @@ to the FWHM by
 @cindex Compare Moffat and Gaussian
 @cindex PSF, Moffat compared Gaussian
 @noindent
-Comparing with the PSF predicted from atmospheric turbulence theory
-with a Moffat function, Trujillo et al.@footnote{Trujillo, I.,
-J. A. L. Aguerri, J. Cepa, and C. M. Gutierrez (2001). ``The effects
-of seeing on S@'ersic profiles - II. The Moffat PSF''. In: MNRAS 328,
-pp. 977---985.} claim that @mymath{\beta} should be 4.765. They also
-show how the Moffat PSF contains the Gaussian PSF as a limiting case
-when @mymath{\beta\to\infty}.
+Comparing with the PSF predicted from atmospheric turbulence theory with a 
Moffat function, Trujillo et al.@footnote{
+Trujillo, I., J. A. L. Aguerri, J. Cepa, and C. M. Gutierrez (2001). ``The effects of seeing on S@'ersic profiles -- II. The Moffat PSF''. In: MNRAS 328, pp. 977--985.}
+claim that @mymath{\beta} should be 4.765.
+They also show how the Moffat PSF contains the Gaussian PSF as a limiting case 
when @mymath{\beta\to\infty}.
 
 @end table
 
 @item An input FITS image
-An input image file can also be specified to be used as a PSF. If the
-sum of its pixels are not equal to 1, the pixels will be multiplied
-by a fraction so the sum does become 1.
+An input image file can also be specified to be used as a PSF.
+If the sum of its pixels is not equal to 1, the pixels will be multiplied by a fraction so the sum becomes 1.
 @end table
 
 
-While the Gaussian is only dependent on the FWHM, the Moffat function
-is also dependent on @mymath{\beta}. Comparing these two functions
-with a fixed FWHM gives the following results:
+While the Gaussian is only dependent on the FWHM, the Moffat function is also 
dependent on @mymath{\beta}.
+Comparing these two functions with a fixed FWHM gives the following results:
 
 @itemize
 @item
 Within the FWHM, the functions don't have significant differences.
 @item
-For a fixed FWHM, as @mymath{\beta} increases, the Moffat function
-becomes sharper.
+For a fixed FWHM, as @mymath{\beta} increases, the Moffat function becomes 
sharper.
 @item
-The Gaussian function is much sharper than the Moffat functions, even
-when @mymath{\beta} is large.
+The Gaussian function is much sharper than the Moffat functions, even when 
@mymath{\beta} is large.
 @end itemize
 
 
@@ -21415,15 +15672,10 @@ when @mymath{\beta} is large.
 
 @cindex Modeling stars
 @cindex Stars, modeling
-In MakeProfiles, stars are generally considered to be a point
-source. This is usually the case for extra galactic studies, were
-nearby stars are also in the field. Since a star is only a point
-source, we assume that it only fills one pixel prior to
-convolution. In fact, exactly for this reason, in astronomical images
-the light profiles of stars are one of the best methods to understand
-the shape of the PSF and a very large fraction of scientific research
-is preformed by assuming the shapes of stars to be the PSF of the
-image.
+In MakeProfiles, stars are generally considered to be a point source.
+This is usually the case for extragalactic studies, where nearby stars are also in the field.
+Since a star is only a point source, we assume that it only fills one pixel 
prior to convolution.
+In fact, exactly for this reason, in astronomical images the light profiles of stars are one of the best methods to understand the shape of the PSF and a very large fraction of scientific research is performed by assuming the shapes of stars to be the PSF of the image.
 
 
 
@@ -21436,9 +15688,7 @@ image.
 @cindex S@'ersic profile
 @cindex Profiles, galaxies
 @cindex Generalized de Vaucouleur profile
-Today, most practitioners agree that the flux of galaxies can be
-modeled with one or a few generalized de Vaucouleur's (or S@'ersic)
-profiles.
+Today, most practitioners agree that the flux of galaxies can be modeled with 
one or a few generalized de Vaucouleur's (or S@'ersic) profiles.
 
 @dispmath{I(r) = I_e \exp \left ( -b_n \left[ \left( r \over r_e \right)^{1/n} 
-1 \right] \right )}
 
@@ -21449,22 +15699,12 @@ profiles.
 @cindex Radius, effective
 @cindex de Vaucouleur profile
 @cindex G@'erard de Vaucouleurs
-G@'erard de Vaucouleurs (1918-1995) was first to show in 1948 that
-this function best fits the galaxy light profiles, with the only
-difference that he held @mymath{n} fixed to a value of 4. 20 years
-later in 1968, J. L. S@'ersic showed that @mymath{n} can have a
-variety of values and does not necessarily need to be 4. This profile
-depends on the effective radius (@mymath{r_e}) which is defined as the
-radius which contains half of the profile brightness (see @ref{Profile
-magnitude}). @mymath{I_e} is the flux at the effective radius.  The
-S@'ersic index @mymath{n} is used to define the concentration of the
-profile within @mymath{r_e} and @mymath{b_n} is a constant dependent
-on @mymath{n}. MacArthur et al.@footnote{MacArthur, L. A.,
-S. Courteau, and J. A. Holtzman (2003). ``Structure of Disk-dominated
-Galaxies. I. Bulge/Disk Parameters, Simulations, and Secular
-Evolution''. In: ApJ 582, pp. 689---722.} show that for
-@mymath{n>0.35}, @mymath{b_n} can be accurately approximated using
-this equation:
+G@'erard de Vaucouleurs (1918-1995) was first to show in 1948 that this 
function best fits the galaxy light profiles, with the only difference that he 
held @mymath{n} fixed to a value of 4.
+20 years later in 1968, J. L. S@'ersic showed that @mymath{n} can have a 
variety of values and does not necessarily need to be 4.
+This profile depends on the effective radius (@mymath{r_e}) which is defined 
as the radius which contains half of the profile brightness (see @ref{Profile 
magnitude}).
+@mymath{I_e} is the flux at the effective radius.
+The S@'ersic index @mymath{n} is used to define the concentration of the 
profile within @mymath{r_e} and @mymath{b_n} is a constant dependent on 
@mymath{n}.
+MacArthur et al.@footnote{MacArthur, L. A., S. Courteau, and J. A. Holtzman 
(2003). ``Structure of Disk-dominated Galaxies. I. Bulge/Disk Parameters, 
Simulations, and Secular Evolution''. In: ApJ 582, pp. 689--722.} show that 
for @mymath{n>0.35}, @mymath{b_n} can be accurately approximated using this 
equation:
 
 @dispmath{b_n=2n - {1\over 3} + {4\over 405n} + {46\over 25515n^2} + {131\over 
1148175n^3}-{2194697\over 30690717750n^4}}
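+
+For example, for the classic de Vaucouleurs profile (@mymath{n=4}), this approximation gives @mymath{b_4\approx7.669}.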
 
@@ -21476,115 +15716,75 @@ this equation:
 @subsubsection Sampling from a function
 
 @cindex Sampling
-A pixel is the ultimate level of accuracy to gather data, we can't get
-any more accurate in one image, this is known as sampling in signal
-processing. However, the mathematical profiles which describe our
-models have infinite accuracy. Over a large fraction of the area of
-astrophysically interesting profiles (for example galaxies or PSFs),
-the variation of the profile over the area of one pixel is not too
-significant. In such cases, the elliptical radius (@mymath{r_{el}} of
-the center of the pixel can be assigned as the final value of the
-pixel, see @ref{Defining an ellipse and ellipsoid}).
+A pixel is the ultimate level of accuracy to gather data; we cannot get any more accurate in one image. This is known as sampling in signal processing.
+However, the mathematical profiles which describe our models have infinite 
accuracy.
+Over a large fraction of the area of astrophysically interesting profiles (for 
example galaxies or PSFs), the variation of the profile over the area of one 
pixel is not too significant.
+In such cases, the elliptical radius (@mymath{r_{el}}, see @ref{Defining an ellipse and ellipsoid}) of the center of the pixel can be assigned as the final value of the pixel.
 
 @cindex Integration over pixel
 @cindex Gradient over pixel area
 @cindex Function gradient over pixel area
-As you approach their center, some galaxies become very sharp (their
-value significantly changes over one pixel's area). This sharpness
-increases with smaller effective radius and larger S@'ersic values.
-Thus rendering the central value extremely inaccurate. The first
-method that comes to mind for solving this problem is integration. The
-functional form of the profile can be integrated over the pixel area
-in a 2D integration process. However, unfortunately numerical
-integration techniques also have their limitations and when such sharp
-profiles are needed they can become extremely inaccurate.
+As you approach their center, some galaxies become very sharp (their value 
significantly changes over one pixel's area).
+This sharpness increases with smaller effective radius and larger S@'ersic values, thus rendering the central value extremely inaccurate.
+The first method that comes to mind for solving this problem is integration.
+The functional form of the profile can be integrated over the pixel area in a 
2D integration process.
+Unfortunately, numerical integration techniques also have their limitations and when such sharp profiles are needed they can become extremely inaccurate.
 
 @cindex Monte carlo integration
-The most accurate method of sampling a continuous profile on a
-discrete space is by choosing a large number of random points within
-the boundaries of the pixel and taking their average value (or Monte
-Carlo integration). This is also, generally speaking, what happens in
-practice with the photons on the pixel. The number of random points
-can be set with @option{--numrandom}.
-
-Unfortunately, repeating this Monte Carlo process would be extremely time
-and CPU consuming if it is to be applied to every pixel. In order to not
-loose too much accuracy, in MakeProfiles, the profile is built using both
-methods explained below. The building of the profile begins from its
-central pixel and continues (radially) outwards. Monte Carlo integration is
-first applied (which yields @mymath{F_r}), then the central pixel value
-(@mymath{F_c}) is calculated on the same pixel. If the fractional
-difference (@mymath{|F_r-F_c|/F_r}) is lower than a given tolerance level
-(specified with @option{--tolerance}) MakeProfiles will stop using Monte
-Carlo integration and only use the central pixel value.
+The most accurate method of sampling a continuous profile on a discrete space 
is by choosing a large number of random points within the boundaries of the 
pixel and taking their average value (or Monte Carlo integration).
+This is also, generally speaking, what happens in practice with the photons on 
the pixel.
+The number of random points can be set with @option{--numrandom}.
+
+Unfortunately, repeating this Monte Carlo process would be extremely time and 
CPU consuming if it is to be applied to every pixel.
+In order not to lose too much accuracy, in MakeProfiles the profile is built using both methods explained below.
+The building of the profile begins from its central pixel and continues 
(radially) outwards.
+Monte Carlo integration is first applied (which yields @mymath{F_r}), then the 
central pixel value (@mymath{F_c}) is calculated on the same pixel.
+If the fractional difference (@mymath{|F_r-F_c|/F_r}) is lower than a given 
tolerance level (specified with @option{--tolerance}) MakeProfiles will stop 
using Monte Carlo integration and only use the central pixel value.
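+
+For example, a sketch of tightening both of these parameters (the values and catalog name are only illustrative; other required options are omitted):
+
+@example
+$ astmkprof catalog.txt --numrandom=10000 --tolerance=0.001
+@end example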
 
 @cindex Inside-out construction
-The ordering of the pixels in this inside-out construction is based on
-@mymath{r=\sqrt{(i_c-i)^2+(j_c-j)^2}}, not @mymath{r_{el}}, see
-@ref{Defining an ellipse and ellipsoid}. When the axis ratios are large
-(near one) this is fine. But when they are small and the object is highly
-elliptical, it might seem more reasonable to follow @mymath{r_{el}} not
-@mymath{r}. The problem is that the gradient is stronger in pixels with
-smaller @mymath{r} (and larger @mymath{r_{el}}) than those with smaller
-@mymath{r_{el}}. In other words, the gradient is strongest along the minor
-axis. So if the next pixel is chosen based on @mymath{r_{el}}, the
-tolerance level will be reached sooner and lots of pixels with large
-fractional differences will be missed.
-
-Monte Carlo integration uses a random number of points. Thus,
-every time you run it, by default, you will get a different
-distribution of points to sample within the pixel. In the case of
-large profiles, this will result in a slight difference of the pixels
-which use Monte Carlo integration each time MakeProfiles is run. To
-have a deterministic result, you have to fix the random number
-generator properties which is used to build the random
-distribution. This can be done by setting the @code{GSL_RNG_TYPE} and
-@code{GSL_RNG_SEED} environment variables and calling MakeProfiles
-with the @option{--envseed} option. To learn more about the process of
-generating random numbers, see @ref{Generating random numbers}.
+The ordering of the pixels in this inside-out construction is based on 
@mymath{r=\sqrt{(i_c-i)^2+(j_c-j)^2}}, not @mymath{r_{el}}, see @ref{Defining 
an ellipse and ellipsoid}.
+When the axis ratios are large (near one) this is fine.
+But when they are small and the object is highly elliptical, it might seem 
more reasonable to follow @mymath{r_{el}} not @mymath{r}.
+The problem is that the gradient is stronger in pixels with smaller @mymath{r} 
(and larger @mymath{r_{el}}) than those with smaller @mymath{r_{el}}.
+In other words, the gradient is strongest along the minor axis.
+So if the next pixel is chosen based on @mymath{r_{el}}, the tolerance level 
will be reached sooner and lots of pixels with large fractional differences 
will be missed.
+
+Monte Carlo integration uses a random number of points.
+Thus, every time you run it, by default, you will get a different distribution 
of points to sample within the pixel.
+In the case of large profiles, this will result in a slight difference of the 
pixels which use Monte Carlo integration each time MakeProfiles is run.
+To have a deterministic result, you have to fix the random number generator properties that are used to build the random distribution.
+This can be done by setting the @code{GSL_RNG_TYPE} and @code{GSL_RNG_SEED} 
environment variables and calling MakeProfiles with the @option{--envseed} 
option.
+To learn more about the process of generating random numbers, see 
@ref{Generating random numbers}.
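+
+For example, a sketch of a reproducible call (the generator name and seed are arbitrary choices; other options are omitted):
+
+@example
+$ export GSL_RNG_TYPE=ranlxs1
+$ export GSL_RNG_SEED=1
+$ astmkprof catalog.txt --envseed
+@end example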
 
 @cindex Seed, Random number generator
 @cindex Random number generator, Seed
-The seed values are fixed for every profile: with @option{--envseed},
-all the profiles have the same seed and without it, each will get a
-different seed using the system clock (which is accurate to within one
-microsecond). The same seed will be used to generate a random number
-for all the sub-pixel positions of all the profiles. So in the former,
-the sub-pixel points checked for all the pixels undergoing Monte carlo
-integration in all profiles will be identical. In other words, the
-sub-pixel points in the first (closest to the center) pixel of all the
-profiles will be identical with each other. All the second pixels
-studied for all the profiles will also receive an identical (different
-from the first pixel) set of sub-pixel points and so on. As long as
-the number of random points used is large enough or the profiles are
-not identical, this should not cause any systematic bias.
+The seed values are fixed for every profile: with @option{--envseed}, all the 
profiles have the same seed and without it, each will get a different seed 
using the system clock (which is accurate to within one microsecond).
+The same seed will be used to generate a random number for all the sub-pixel 
positions of all the profiles.
+So in the former, the sub-pixel points checked for all the pixels undergoing 
Monte Carlo integration in all profiles will be identical.
+In other words, the sub-pixel points in the first (closest to the center) 
pixel of all the profiles will be identical with each other.
+All the second pixels studied for all the profiles will also receive an 
identical (different from the first pixel) set of sub-pixel points and so on.
+As long as the number of random points used is large enough or the profiles 
are not identical, this should not cause any systematic bias.
 
 
 @node Oversampling,  , Sampling from a function, Modeling basics
 @subsubsection Oversampling
 
 @cindex Oversampling
-The steps explained in @ref{Sampling from a function} do give an
-accurate representation of a profile prior to convolution. However, in
-an actual observation, the image is first convolved with or blurred by
-the atmospheric and instrument PSF in a continuous space and then it
-is sampled on the discrete pixels of the camera.
+The steps explained in @ref{Sampling from a function} do give an accurate 
representation of a profile prior to convolution.
+However, in an actual observation, the image is first convolved with or 
blurred by the atmospheric and instrument PSF in a continuous space and then it 
is sampled on the discrete pixels of the camera.
 
 @cindex PSF over-sample
-In order to more accurately simulate this process, the unconvolved image
-and the PSF are created on a finer pixel grid. In other words, the output
-image is a certain odd-integer multiple of the desired size, we can call
-this `oversampling'. The user can specify this multiple as a command-line
-option. The reason this has to be an odd number is that the PSF has to be
-centered on the center of its image. An image with an even number of pixels
-on each side does not have a central pixel.
+In order to more accurately simulate this process, the unconvolved image and 
the PSF are created on a finer pixel grid.
+In other words, the output image is a certain odd-integer multiple of the 
desired size; we can call this `oversampling'.
+The user can specify this multiple as a command-line option.
+The reason this has to be an odd number is that the PSF has to be centered on 
the center of its image.
+An image with an even number of pixels on each side does not have a central 
pixel.
 
-The image can then be convolved with the PSF (which should also be
-oversampled on the same scale). Finally, image can be sub-sampled to
-get to the initial desired pixel size of the output image. After this,
-mock noise can be added as explained in the next section. This is
-because unlike the PSF, the noise occurs in each output pixel, not on
-a continuous space like all the prior steps.
+The image can then be convolved with the PSF (which should also be oversampled 
on the same scale).
+Finally, the image can be sub-sampled to get to the initial desired pixel size 
of the output image.
+After this, mock noise can be added as explained in the next section.
+This is because unlike the PSF, the noise occurs in each output pixel, not on 
a continuous space like all the prior steps.
 
 
 
@@ -21592,24 +15792,15 @@ a continuous space like all the prior steps.
 @node If convolving afterwards, Flux Brightness and magnitude, Modeling 
basics, MakeProfiles
 @subsection If convolving afterwards
 
-In case you want to convolve the image later with a given point spread
-function, make sure to use a larger image size. After convolution, the
-profiles become larger and a profile that is normally completely
-outside of the image might fall within it.
+In case you want to convolve the image later with a given point spread 
function, make sure to use a larger image size.
+After convolution, the profiles become larger and a profile that is normally 
completely outside of the image might fall within it.
 
-On one axis, if you want your final (convolved) image to be @mymath{m}
-pixels and your PSF is @mymath{2n+1} pixels wide, then when calling
-MakeProfiles, set the axis size to @mymath{m+2n}, not @mymath{m}. You
-also have to shift all the pixel positions of the profile centers on
-the that axis by @mymath{n} pixels to the positive.
+On one axis, if you want your final (convolved) image to be @mymath{m} pixels 
and your PSF is @mymath{2n+1} pixels wide, then when calling MakeProfiles, set 
the axis size to @mymath{m+2n}, not @mymath{m}.
+You also have to shift all the pixel positions of the profile centers on that 
axis by @mymath{n} pixels in the positive direction.
 
-After convolution, you can crop the outer @mymath{n} pixels with the
-section crop box specification of Crop: @option{--section=n:*-n,n:*-n}
-assuming your PSF is a square, see @ref{Crop section syntax}. This will
-also remove all discrete Fourier transform artifacts (blurred sides) from
-the final image. To facilitate this shift, MakeProfiles has the options
-@option{--xshift}, @option{--yshift} and @option{--prepforconv}, see
-@ref{Invoking astmkprof}.
+After convolution, you can crop the outer @mymath{n} pixels with the section 
crop box specification of Crop: @option{--section=n:*-n,n:*-n} (assuming your 
PSF is a square), see @ref{Crop section syntax}.
+This will also remove all discrete Fourier transform artifacts (blurred sides) 
from the final image.
+To facilitate this shift, MakeProfiles has the options @option{--xshift}, 
@option{--yshift} and @option{--prepforconv}, see @ref{Invoking astmkprof}.
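+
+For example, assume (purely for illustration) a PSF that is 11 pixels wide 
(so @mymath{n=5}) and a desired final width of @mymath{m=200} pixels.
+The commands below are only a sketch of this workflow (the file names are 
hypothetical; see @ref{Invoking astmkprof} for the exact behavior of the 
shifting options):
+
+@example
+$ astmkprof cat.txt --mergedsize=200,200 --shift=5,5 \
+            --output=built.fits
+$ astconvolve built.fits --kernel=psf.fits --output=conv.fits
+$ astcrop conv.fits --section=5:*-5,5:*-5
+@end example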
 
 
 
@@ -21620,75 +15811,44 @@ the final image. To facilitate this shift, 
MakeProfiles has the options
 @cindex ADU
 @cindex Gain
 @cindex Counts
-Astronomical data pixels are usually in units of
-counts@footnote{Counts are also known as analog to digital units
-(ADU).} or electrons or either one divided by seconds. To convert from
-the counts to electrons, you will need to know the instrument gain. In
-any case, they can be directly converted to energy or energy/time
-using the basic hardware (telescope, camera and filter)
-information. We will continue the discussion assuming the pixels are
-in units of energy/time.
+Astronomical data pixels are usually in units of counts@footnote{Counts are 
also known as analog to digital units (ADU).} or electrons, or either one 
divided by seconds.
+To convert from the counts to electrons, you will need to know the instrument 
gain.
+In any case, they can be directly converted to energy or energy/time using the 
basic hardware (telescope, camera and filter) information.
+We will continue the discussion assuming the pixels are in units of 
energy/time.
 
 @cindex Flux
 @cindex Luminosity
 @cindex Brightness
-The @emph{brightness} of an object is defined as its total detected
-energy per time. This is simply the sum of the pixels that are
-associated with that detection by our detection tool for example
-@ref{NoiseChisel}@footnote{If further processing is done, for example
-the Kron or Petrosian radii are calculated, then the detected area is
-not sufficient and the total area that was within the respective
-radius must be used.}. The @emph{flux} of an object is in units of
-energy/time/area and for a detected object, it is defined as its
-brightness divided by the area used to collect the light from the
-source or the telescope aperture (for example in
-@mymath{cm^2})@footnote{For a full object that spans over several
-pixels, the telescope area should be used to find the flux. However,
-sometimes, only the brightness per pixel is desired. In such cases
-this book also @emph{loosely} uses the term flux. This is only
-approximately accurate however, since while all the pixels have a
-fixed area, the pixel size can vary with camera on the
-telescope.}. Knowing the flux (@mymath{f}) and distance to the object
-(@mymath{r}), we can calculate its @emph{luminosity}:
-@mymath{L=4{\pi}r^2f}. Therefore, flux and luminosity are intrinsic
-properties of the object, while brightness depends on our detecting
-tools (hardware and software). Here we will not be discussing
-luminosity, but brightness. However, since luminosity is the
-astrophysically interesting quantity, we also defined it here to avoid
-possible confusion between these two terms because they both have the
-same units.
+The @emph{brightness} of an object is defined as its total detected energy per 
time.
+This is simply the sum of the pixels that are associated with that detection 
by our detection tool, for example @ref{NoiseChisel}@footnote{If further 
processing is done, for example the Kron or Petrosian radii are calculated, 
then the detected area is not sufficient and the total area that was within the 
respective radius must be used.}.
+The @emph{flux} of an object is in units of energy/time/area and for a 
detected object, it is defined as its brightness divided by the area used to 
collect the light from the source or the telescope aperture (for example in 
@mymath{cm^2})@footnote{For a full object that spans over several pixels, the 
telescope area should be used to find the flux.
+However, sometimes, only the brightness per pixel is desired.
+In such cases this book also @emph{loosely} uses the term flux.
+This is only approximately accurate, however, since while all the pixels have 
a fixed area, the pixel size can vary with the camera on the telescope.}.
+Knowing the flux (@mymath{f}) and distance to the object (@mymath{r}), we can 
calculate its @emph{luminosity}: @mymath{L=4{\pi}r^2f}.
+Therefore, flux and luminosity are intrinsic properties of the object, while 
brightness depends on our detecting tools (hardware and software).
+Here we will not be discussing luminosity, but brightness.
+However, since luminosity is the astrophysically interesting quantity, we also 
defined it here to avoid possible confusion between these two terms because 
they both have the same units.
 
 @cindex Magnitude zero-point
 @cindex Magnitudes from flux
 @cindex Flux to magnitude conversion
 @cindex Astronomical Magnitude system
-Images of astronomical objects span over a very large range of
-brightness. With the Sun (as the brightest object) being roughly
-@mymath{2.5^{60}=10^{24}} times brighter than the faintest galaxies we
-can currently detect. Therefore discussing brightness will be very
-hard, and astronomers have chosen to use a logarithmic scale to talk
-about the brightness of astronomical objects. But the logarithm can
-only be usable with a unit-less and always positive value. Fortunately
-brightness is always positive and to remove the units we divide the
-brightness of the object (@mymath{B}) by a reference brightness
-(@mymath{B_r}). We then define the resulting logarithmic scale as
-@mymath{magnitude} through the following relation@footnote{The
-@mymath{-2.5} factor in the definition of magnitudes is a legacy of
-the our ancient colleagues and in particular Hipparchus of Nicaea
-(190-120 BC).}
+Images of astronomical objects span over a very large range of brightness: the 
Sun (as the brightest object) is roughly @mymath{2.5^{60}=10^{24}} times 
brighter than the faintest galaxies we can currently detect.
+Therefore, discussing brightness directly will be very hard, and astronomers 
have chosen a logarithmic scale to talk about the brightness of astronomical 
objects.
+But the logarithm is only usable with a unit-less and always positive value.
+Fortunately, brightness is always positive; to remove the units, we divide the 
brightness of the object (@mymath{B}) by a reference brightness 
(@mymath{B_r}).
+We then define the resulting logarithmic scale as @mymath{magnitude} through 
the following relation@footnote{The @mymath{-2.5} factor in the definition of 
magnitudes is a legacy of our ancient colleagues, in particular Hipparchus of 
Nicaea (190-120 BC).}
 
 @dispmath{m-m_r=-2.5\log_{10} \left( B \over B_r \right)}
 
 @cindex Zero-point magnitude
 @noindent
-@mymath{m} is defined as the magnitude of the object and @mymath{m_r}
-is the pre-defined magnitude of the reference brightness. One
-particularly easy condition is when @mymath{B_r=1}. This will allow us
-to summarize all the hardware specific parameters discussed above into
-one number as the reference magnitude which is commonly known as the
-Zero-point@footnote{When @mymath{B=Br=1}, the right side of the
-magnitude definition will be zero. Hence the name, ``zero-point''.}
-magnitude.
+@mymath{m} is defined as the magnitude of the object and @mymath{m_r} is the 
pre-defined magnitude of the reference brightness.
+One particularly easy condition is when @mymath{B_r=1}.
+This will allow us to summarize all the hardware specific parameters discussed 
above into one number as the reference magnitude which is commonly known as the 
Zero-point@footnote{When @mymath{B=B_r=1}, the right side of the magnitude 
definition will be zero.
+Hence the name, ``zero-point''.} magnitude.
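+
+For example, with a zero-point magnitude of 25 (and @mymath{B_r=1}), an 
object whose pixels sum to a brightness of @mymath{B=100} (in the same units) 
has a magnitude of @mymath{25-2.5\log_{10}(100)=20}; the numbers here are 
purely illustrative.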
 
 
 @node Profile magnitude, Invoking astmkprof, Flux Brightness and magnitude, 
MakeProfiles
@@ -21697,35 +15857,20 @@ magnitude.
 @cindex Brightness
 @cindex Truncation radius
 @cindex Sum for total flux
-To find the profile brightness or its magnitude, (see @ref{Flux
-Brightness and magnitude}), it is customary to use the 2D integration
-of the flux to infinity. However, in MakeProfiles we do not follow
-this idealistic approach and apply a more realistic method to find the
-total brightness or magnitude: the sum of all the pixels belonging to
-a profile within its predefined truncation radius. Note that if the
-truncation radius is not large enough, this can be significantly
-different from the total integrated light to infinity.
+To find the profile brightness or its magnitude (see @ref{Flux Brightness and 
magnitude}), it is customary to use the 2D integration of the flux to infinity.
+However, in MakeProfiles we do not follow this idealistic approach and instead 
apply a more realistic method to find the total brightness or magnitude: the 
sum of all the pixels belonging to a profile within its predefined truncation 
radius.
+Note that if the truncation radius is not large enough, this can be 
significantly different from the total integrated light to infinity.
 
 @cindex Integration to infinity
-An integration to infinity is not a realistic condition because no
-galaxy extends indefinitely (important for high S@'ersic index
-profiles), pixelation can also cause a significant difference between
-the actual total pixel sum value of the profile and that of
-integration to infinity, especially in small and high S@'ersic index
-profiles. To be safe, you can specify a large enough truncation radius
-for such compact high S@'ersic index profiles.
+An integration to infinity is not a realistic condition because no galaxy 
extends indefinitely (important for high S@'ersic index profiles); pixelation 
can also cause a significant difference between the actual total pixel sum 
value of the profile and that of integration to infinity, especially in small 
and high S@'ersic index profiles.
+To be safe, you can specify a large enough truncation radius for such compact 
high S@'ersic index profiles.
 
-If oversampling is used then the brightness is calculated using the
-over-sampled image, see @ref{Oversampling} which is much more
-accurate. The profile is first built in an array completely bounding
-it with a normalization constant of unity (see @ref{Galaxies}). Taking
-@mymath{B} to be the desired brightness and @mymath{S} to be the sum
-of the pixels in the created profile, every pixel is then multiplied
-by @mymath{B/S} so the sum is exactly @mymath{B}.
+If oversampling is used, then the brightness is calculated using the 
over-sampled image (see @ref{Oversampling}), which is much more accurate.
+The profile is first built in an array completely bounding it with a 
normalization constant of unity (see @ref{Galaxies}).
+Taking @mymath{B} to be the desired brightness and @mymath{S} to be the sum of 
the pixels in the created profile, every pixel is then multiplied by 
@mymath{B/S} so the sum is exactly @mymath{B}.
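+
+For example (with purely illustrative numbers), if the initially built 
profile pixels sum to @mymath{S=2} and the desired brightness is 
@mymath{B=0.5}, every pixel is multiplied by @mymath{B/S=0.25}, so the final 
sum is exactly @mymath{0.5}.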
 
-If the @option{--individual} option is called, this same array is
-written to a FITS file. If not, only the overlapping pixels of this
-array and the output image are kept and added to the output array.
+If the @option{--individual} option is called, this same array is written to a 
FITS file.
+If not, only the overlapping pixels of this array and the output image are 
kept and added to the output array.
 
 
 
@@ -21735,9 +15880,8 @@ array and the output image are kept and added to the 
output array.
 @node Invoking astmkprof,  , Profile magnitude, MakeProfiles
 @subsection Invoking MakeProfiles
 
-MakeProfiles will make any number of profiles specified in a catalog either
-individually or in one image. The executable name is @file{astmkprof} with
-the following general template
+MakeProfiles will make any number of profiles specified in a catalog either 
individually or in one image.
+The executable name is @file{astmkprof} with the following general template
 
 @example
 $ astmkprof [OPTION ...] [Catalog]
@@ -21765,55 +15909,28 @@ $ astmkprof --individual --oversample 3 
--mergedsize=500,500 cat.txt
 @end example
 
 @noindent
-The parameters of the mock profiles can either be given through a catalog
-(which stores the parameters of many mock profiles, see @ref{MakeProfiles
-catalog}), or the @option{--kernel} option (see @ref{MakeProfiles output
-dataset}). The catalog can be in the FITS ASCII, FITS binary format, or
-plain text formats (see @ref{Tables}). A plain text catalog can also be
-provided using the Standard input (see @ref{Standard input}). The columns
-related to each parameter can be determined both by number, or by
-match/search criteria using the column names, units, or comments. with the
-options ending in @option{col}, see below.
-
-Without any file given to the @option{--background} option, MakeProfiles
-will make a zero-valued image and build the profiles on that (its size and
-main WCS parameters can also be defined through the options described in
-@ref{MakeProfiles output dataset}). Besides the main/merged image
-containing all the profiles in the catalog, it is also possible to build
-individual images for each profile (only enclosing one full profile to its
-truncation radius) with the @option{--individual} option.
-
-If an image is given to the @option{--background} option, the pixels of
-that image are used as the background value for every pixel. The flux value
-of each profile pixel will be added to the pixel in that background
-value. In this case, the values to all options relating to the output size
-and WCS will be ignored if specified (for example @option{--oversample},
-@option{--mergedsize}, and @option{--prepforconv}) on the command-line or
-in the configuration files.
-
-The sections below discuss the options specific to MakeProfiles based on
-context: the input catalog settings which can have many rows for different
-profiles are discussed in @ref{MakeProfiles catalog}, in @ref{MakeProfiles
-profile settings}, we discuss how you can set general profile settings
-(that are the same for all the profiles in the catalog). Finally
-@ref{MakeProfiles output dataset} and @ref{MakeProfiles log file} discuss
-the outputs of MakeProfiles and how you can configure them. Besides these,
-MakeProfiles also supports all the common Gnuastro program options that are
-discussed in @ref{Common options}, so please flip through them is well for
-a more comfortable usage.
-
-When building 3D profiles, there are more degrees of freedom. Hence, more
-columns are necessary and all the values related to dimensions (for example
-size of dataset in each dimension and the WCS properties) must also have 3
-values. To allow having an independent set of default values for creating
-3D profiles, MakeProfiles also installs a @file{astmkprof-3d.conf}
-configuration file (see @ref{Configuration files}). You can use this for
-default 3D profile values. For example, if you installed Gnuastro with the
-prefix @file{/usr/local} (the default location, see @ref{Installation
-directory}), you can benefit from this configuration file by running
-MakeProfiles like the example below. As with all configuration files, if
-you want to customize a given option, call it before the configuration
-file.
+The parameters of the mock profiles can either be given through a catalog 
(which stores the parameters of many mock profiles, see @ref{MakeProfiles 
catalog}), or the @option{--kernel} option (see @ref{MakeProfiles output 
dataset}).
+The catalog can be in the FITS ASCII, FITS binary, or plain text formats (see 
@ref{Tables}).
+A plain text catalog can also be provided using the Standard input (see 
@ref{Standard input}).
+The columns related to each parameter can be determined either by number, or 
by match/search criteria using the column names, units, or comments, with the 
options ending in @option{col}; see below.
+
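+For example, a plain text catalog can be piped in through the standard input; 
the row below is only an illustration (its column order must correspond to 
your options ending in @option{col}):
+
+@example
+$ echo "1 100 100 1 20 2.5 45 0.8 22 5" \
+       | astmkprof --mode=img --mergedsize=200,200
+@end example
+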
+Without any file given to the @option{--background} option, MakeProfiles will 
make a zero-valued image and build the profiles on that (its size and main WCS 
parameters can also be defined through the options described in 
@ref{MakeProfiles output dataset}).
+Besides the main/merged image containing all the profiles in the catalog, it 
is also possible to build individual images for each profile (only enclosing 
one full profile to its truncation radius) with the @option{--individual} 
option.
+
+If an image is given to the @option{--background} option, the pixels of that 
image are used as the background value for every pixel.
+The flux value of each profile pixel will be added to the corresponding pixel 
of that background image.
+In this case, the values to all options relating to the output size and WCS 
(for example @option{--oversample}, @option{--mergedsize}, and 
@option{--prepforconv}) will be ignored, whether they are specified on the 
command-line or in the configuration files.
+
+The sections below discuss the options specific to MakeProfiles based on 
context: the input catalog settings, which can have many rows for different 
profiles, are discussed in @ref{MakeProfiles catalog}; in @ref{MakeProfiles 
profile settings}, we discuss how you can set general profile settings (that 
are the same for all the profiles in the catalog).
+Finally @ref{MakeProfiles output dataset} and @ref{MakeProfiles log file} 
discuss the outputs of MakeProfiles and how you can configure them.
+Besides these, MakeProfiles also supports all the common Gnuastro program 
options that are discussed in @ref{Common options}, so please flip through them 
as well for a more comfortable usage.
+
+When building 3D profiles, there are more degrees of freedom.
+Hence, more columns are necessary and all the values related to dimensions 
(for example, the size of the dataset in each dimension and the WCS properties) 
must also have 3 values.
+To allow having an independent set of default values for creating 3D profiles, 
MakeProfiles also installs a @file{astmkprof-3d.conf} configuration file (see 
@ref{Configuration files}).
+You can use this for default 3D profile values.
+For example, if you installed Gnuastro with the prefix @file{/usr/local} (the 
default location, see @ref{Installation directory}), you can benefit from this 
configuration file by running MakeProfiles like the example below.
+As with all configuration files, if you want to customize a given option, call 
it before the configuration file.
 
 @example
 $ astmkprof --config=/usr/local/etc/astmkprof-3d.conf catalog.txt
@@ -21823,31 +15940,20 @@ $ astmkprof --config=/usr/local/etc/astmkprof-3d.conf 
catalog.txt
 @cindex Alias, shell
 @cindex Shell startup
 @cindex Startup, shell
-To further simplify the process, you can define a shell alias in any
-startup file (for example @file{~/.bashrc}, see @ref{Installation
-directory}). Assuming that you installed Gnuastro in @file{/usr/local}, you
-can add this line to the startup file (you may put it all in one line, it
-is broken into two lines here for fitting within page limits).
+To further simplify the process, you can define a shell alias in any startup 
file (for example @file{~/.bashrc}, see @ref{Installation directory}).
+Assuming that you installed Gnuastro in @file{/usr/local}, you can add this 
line to the startup file (you may put it all in one line, it is broken into two 
lines here for fitting within page limits).
 
 @example
 alias astmkprof-3d="astmkprof --config=/usr/local/etc/astmkprof-3d.conf"
 @end example
 
 @noindent
-Using this alias, you can call MakeProfiles with the name
-@command{astmkprof-3d} (instead of @command{astmkprof}). It will
-automatically load the 3D specific configuration file first, and then parse
-any other arguments, options or configuration files. You can change the
-default values in this 3D configuration file by calling them on the
-command-line as you do with @command{astmkprof}@footnote{Recall that for
-single-invocation options, the last command-line invocation takes
-precedence over all previous invocations (including those in the 3D
-configuration file). See the description of @option{--config} in
-@ref{Operating mode options}.}.
+Using this alias, you can call MakeProfiles with the name 
@command{astmkprof-3d} (instead of @command{astmkprof}).
+It will automatically load the 3D specific configuration file first, and then 
parse any other arguments, options or configuration files.
+You can change the default values in this 3D configuration file by calling 
them on the command-line as you do with @command{astmkprof}@footnote{Recall 
that for single-invocation options, the last command-line invocation takes 
precedence over all previous invocations (including those in the 3D 
configuration file).
+See the description of @option{--config} in @ref{Operating mode options}.}.
 
-Please see @ref{Sufi simulates a detection} for a very complete tutorial
-explaining how one could use MakeProfiles in conjunction with other
-Gnuastro's programs to make a complete simulated image of a mock galaxy.
+Please see @ref{Sufi simulates a detection} for a very complete tutorial 
explaining how one could use MakeProfiles in conjunction with Gnuastro's other 
programs to make a complete simulated image of a mock galaxy.
 
 @menu
 * MakeProfiles catalog::        Required catalog properties.
@@ -21858,75 +15964,48 @@ Gnuastro's programs to make a complete simulated 
image of a mock galaxy.
 
 @node MakeProfiles catalog, MakeProfiles profile settings, Invoking astmkprof, 
Invoking astmkprof
 @subsubsection MakeProfiles catalog
-The catalog containing information about each profile can be in the FITS
-ASCII, FITS binary, or plain text formats (see @ref{Tables}). The latter
-can also be provided using standard input (see @ref{Standard input}). Its
-columns can be ordered in any desired manner. You can specify which columns
-belong to which parameters using the set of options discussed below. For
-example through the @option{--rcol} and @option{--tcol} options, you can
-specify the column that contains the radial parameter for each profile and
-its truncation respectively. See @ref{Selecting table columns} for a
-thorough discussion on the values to these options.
-
-The value for the profile center in the catalog (the @option{--ccol}
-option) can be a floating point number so the profile center can be on any
-sub-pixel position. Note that pixel positions in the FITS standard start
-from 1 and an integer is the pixel center. So a 2D image actually starts
-from the position (0.5, 0.5), which is the bottom-left corner of the first
-pixel. When a @option{--background} image with WCS information is provided
-or you specify the WCS parameters with the respective options, you may also
-use RA and Dec to identify the center of each profile (see the
-@option{--mode} option below).
-
-In MakeProfiles, profile centers do not have to be in (overlap with) the
-final image.  Even if only one pixel of the profile within the truncation
-radius overlaps with the final image size, the profile is built and
-included in the final image image. Profiles that are completely out of the
-image will not be created (unless you explicitly ask for it with the
-@option{--individual} option). You can use the output log file (created
-with @option{--log} to see which profiles were within the image, see
-@ref{Common options}.
-
-If PSF profiles (Moffat or Gaussian, see @ref{PSF}) are in the catalog and
-the profiles are to be built in one image (when @option{--individual} is
-not used), it is assumed they are the PSF(s) you want to convolve your
-created image with. So by default, they will not be built in the output
-image but as separate files. The sum of pixels of these separate files will
-also be set to unity (1) so you are ready to convolve, see @ref{Convolution
-process}. As a summary, the position and magnitude of PSF profile will be
-ignored. This behavior can be disabled with the @option{--psfinimg}
-option. If you want to create all the profiles separately (with
-@option{--individual}) and you want the sum of the PSF profile pixels to be
-unity, you have to set their magnitudes in the catalog to the zero-point
-magnitude and be sure that the central positions of the profiles don't have
-any fractional part (the PSF center has to be in the center of the pixel).
-
-The list of options directly related to the input catalog columns is shown
-below.
+The catalog containing information about each profile can be in the FITS 
ASCII, FITS binary, or plain text formats (see @ref{Tables}).
+The latter can also be provided using standard input (see @ref{Standard 
input}).
+Its columns can be ordered in any desired manner.
+You can specify which columns belong to which parameters using the set of 
options discussed below.
+For example, through the @option{--rcol} and @option{--tcol} options, you can 
specify the columns that contain the radial parameter and the truncation radius 
of each profile, respectively.
+See @ref{Selecting table columns} for a thorough discussion on the values to 
these options.
+
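+For example (the column numbers and names here are hypothetical), both of the 
calls below select the same radius and truncation columns, once by number and 
once by name:
+
+@example
+$ astmkprof cat.fits --rcol=5 --tcol=9
+$ astmkprof cat.fits --rcol=RADIUS --tcol=TRUNCATION
+@end example
+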
+The value for the profile center in the catalog (the @option{--ccol} option) 
can be a floating point number so the profile center can be on any sub-pixel 
position.
+Note that pixel positions in the FITS standard start from 1 and an integer is 
the pixel center.
+So a 2D image actually starts from the position (0.5, 0.5), which is the 
bottom-left corner of the first pixel.
+When a @option{--background} image with WCS information is provided or you 
specify the WCS parameters with the respective options, you may also use RA and 
Dec to identify the center of each profile (see the @option{--mode} option 
below).
+
+In MakeProfiles, profile centers do not have to be in (overlap with) the final 
image.
+Even if only one pixel of the profile within the truncation radius overlaps 
with the final image size, the profile is built and included in the final 
image.
+Profiles that are completely out of the image will not be created (unless you 
explicitly ask for it with the @option{--individual} option).
+You can use the output log file (created with @option{--log}) to see which 
profiles were within the image, see @ref{Common options}.
+
+If PSF profiles (Moffat or Gaussian, see @ref{PSF}) are in the catalog and the 
profiles are to be built in one image (when @option{--individual} is not used), 
it is assumed they are the PSF(s) you want to convolve your created image with.
+So by default, they will not be built in the output image but as separate 
files.
+The sum of pixels of these separate files will also be set to unity (1) so you 
are ready to convolve, see @ref{Convolution process}.
+As a summary, the position and magnitude of the PSF profile(s) will be ignored.
+This behavior can be disabled with the @option{--psfinimg} option.
+If you want to create all the profiles separately (with @option{--individual}) 
and you want the sum of the PSF profile pixels to be unity, you have to set 
their magnitudes in the catalog to the zero-point magnitude and be sure that 
the central positions of the profiles don't have any fractional part (the PSF 
center has to be in the center of the pixel).
+
+The list of options directly related to the input catalog columns is shown 
below.
 
 @table @option
 
 @item --ccol=STR/INT
-Center coordinate column for each dimension. This option must be called two
-times to define the center coordinates in an image. For example
-@option{--ccol=RA} and @option{--ccol=DEC} (along with @option{--mode=wcs})
-will inform MakeProfiles to look into the catalog columns named @option{RA}
-and @option{DEC} for the Right Ascension and Declination of the profile
-centers.
+Center coordinate column for each dimension.
+This option must be called two times to define the center coordinates in an 
image.
+For example @option{--ccol=RA} and @option{--ccol=DEC} (along with 
@option{--mode=wcs}) will inform MakeProfiles to look into the catalog columns 
named @option{RA} and @option{DEC} for the Right Ascension and Declination of 
the profile centers.
 
 @item --fcol=INT/STR
-The functional form of the profile with one of the values below depending
-on the desired profile. The column can contain either the numeric codes
-(for example `@code{1}') or string characters (for example
-`@code{sersic}'). The numeric codes are easier to use in scripts which
-generate catalogs with hundreds or thousands of profiles.
-
-The string format can be easier when the catalog is to be written/checked
-by hand/eye before running MakeProfiles. It is much more readable and
-provides a level of documentation. All Gnuastro's recognized table formats
-(see @ref{Recognized table formats}) accept string type columns. To have
-string columns in a plain text table/catalog, see @ref{Gnuastro text table
-format}.
+The functional form of the profile with one of the values below depending on 
the desired profile.
+The column can contain either the numeric codes (for example `@code{1}') or 
string characters (for example `@code{sersic}').
+The numeric codes are easier to use in scripts which generate catalogs with 
hundreds or thousands of profiles.
+
+The string format can be easier when the catalog is to be written/checked by 
hand/eye before running MakeProfiles.
+It is much more readable and provides a level of documentation.
+All Gnuastro's recognized table formats (see @ref{Recognized table formats}) 
accept string type columns.
+To have string columns in a plain text table/catalog, see @ref{Gnuastro text 
table format}.
 
 @itemize
 @item
@@ -21945,221 +16024,157 @@ Point source with `@code{point}' or `@code{4}'.
 Flat profile with `@code{flat}' or `@code{5}'.
 
 @item
-Circumference profile with `@code{circum}' or `@code{6}'. A fixed value
-will be used for all pixels less than or equal to the truncation radius
-(@mymath{r_t}) and greater than @mymath{r_t-w} (@mymath{w} is the value to
-the @option{--circumwidth}).
+Circumference profile with `@code{circum}' or `@code{6}'.
+A fixed value will be used for all pixels less than or equal to the truncation 
radius (@mymath{r_t}) and greater than @mymath{r_t-w} (@mymath{w} is the value 
to the @option{--circumwidth}).
 
 @item
-@item
-Radial distance profile with `@code{distance}' or `@code{7}'. At the lowest
-level, each pixel only has an elliptical radial distance given the
-profile's shape and orientation (see @ref{Defining an ellipse and
-ellipsoid}). When this profile is chosen, the pixel's elliptical radial
-distance from the profile center is written as its value. For this profile,
-the value in the magnitude column (@option{--mcol}) will be ignored.
-
-You can use this for checks or as a first approximation to define your own
-higher-level radial function. In the latter case, just note that the
-central values are going to be incorrect (see @ref{Sampling from a
-function}).
+Radial distance profile with `@code{distance}' or `@code{7}'.
+At the lowest level, each pixel only has an elliptical radial distance given 
the profile's shape and orientation (see @ref{Defining an ellipse and 
ellipsoid}).
+When this profile is chosen, the pixel's elliptical radial distance from the 
profile center is written as its value.
+For this profile, the value in the magnitude column (@option{--mcol}) will be 
ignored.
+
+You can use this for checks or as a first approximation to define your own 
higher-level radial function.
+In the latter case, just note that the central values are going to be 
incorrect (see @ref{Sampling from a function}).
 @end itemize
 
 @item --rcol=STR/INT
-The radius parameter of the profiles. Effective radius (@mymath{r_e}) if
-S@'ersic, FWHM if Moffat or Gaussian.
+The radius parameter of the profiles.
+Effective radius (@mymath{r_e}) if S@'ersic, FWHM if Moffat or Gaussian.
 
 @item --ncol=STR/INT
 The S@'ersic index (@mymath{n}) or Moffat @mymath{\beta}.
 
 @item --pcol=STR/INT
-The position angle (in degrees) of the profiles relative to the first FITS
-axis (horizontal when viewed in SAO ds9). When building a 3D profile, this
-is the first Euler angle: first rotation of the ellipsoid major axis from
-the first FITS axis (rotating about the third axis). See @ref{Defining an
-ellipse and ellipsoid}.
+The position angle (in degrees) of the profiles relative to the first FITS 
axis (horizontal when viewed in SAO ds9).
+When building a 3D profile, this is the first Euler angle: first rotation of 
the ellipsoid major axis from the first FITS axis (rotating about the third 
axis).
+See @ref{Defining an ellipse and ellipsoid}.
 
 @item --p2col=STR/INT
-Second Euler angle (in degrees) when building a 3D ellipsoid. This is the
-second rotation of the ellipsoid major axis (following @option{--pcol})
-about the (rotated) X axis. See @ref{Defining an ellipse and
-ellipsoid}. This column is ignored when building a 2D profile.
+Second Euler angle (in degrees) when building a 3D ellipsoid.
+This is the second rotation of the ellipsoid major axis (following 
@option{--pcol}) about the (rotated) X axis.
+See @ref{Defining an ellipse and ellipsoid}.
+This column is ignored when building a 2D profile.
 
 @item --p3col=STR/INT
-Third Euler angle (in degrees) when building a 3D ellipsoid. This is the
-third rotation of the ellipsoid major axis (following @option{--pcol} and
-@option{--p2col}) about the (rotated) Z axis. See @ref{Defining an ellipse
-and ellipsoid}. This column is ignored when building a 2D profile.
+Third Euler angle (in degrees) when building a 3D ellipsoid.
+This is the third rotation of the ellipsoid major axis (following 
@option{--pcol} and @option{--p2col}) about the (rotated) Z axis.
+See @ref{Defining an ellipse and ellipsoid}.
+This column is ignored when building a 2D profile.
 
 @item --qcol=STR/INT
-The axis ratio of the profiles (minor axis divided by the major axis in a
-2D ellipse). When building a 3D ellipse, this is the ratio of the major
-axis to the semi-axis length of the second dimension (in a right-handed
-coordinate system). See @mymath{q1} in @ref{Defining an ellipse and
-ellipsoid}.
+The axis ratio of the profiles (minor axis divided by the major axis in a 2D 
ellipse).
+When building a 3D ellipsoid, this is the ratio of the major axis to the 
semi-axis length of the second dimension (in a right-handed coordinate system).
+See @mymath{q1} in @ref{Defining an ellipse and ellipsoid}.
 
 @item --q2col=STR/INT
-The ratio of the ellipsoid major axis to the third semi-axis length (in a
-right-handed coordinate system) of a 3D ellipsoid. See @mymath{q1} in
-@ref{Defining an ellipse and ellipsoid}. This column is ignored when
-building a 2D profile.
+The ratio of the ellipsoid major axis to the third semi-axis length (in a 
right-handed coordinate system) of a 3D ellipsoid.
+See @mymath{q1} in @ref{Defining an ellipse and ellipsoid}.
+This column is ignored when building a 2D profile.
 
 @item --mcol=STR/INT
-The total pixelated magnitude of the profile within the truncation radius,
-see @ref{Profile magnitude}.
+The total pixelated magnitude of the profile within the truncation radius, see 
@ref{Profile magnitude}.
 
 @item --tcol=STR/INT
-The truncation radius of this profile. By default it is in units of the
-radial parameter of the profile (the value in the @option{--rcol} of the
-catalog). If @option{--tunitinp} is given, this value is interpreted in
-units of pixels (prior to oversampling) irrespective of the profile.
+The truncation radius of this profile.
+By default it is in units of the radial parameter of the profile (the value in 
the @option{--rcol} of the catalog).
+If @option{--tunitinp} is given, this value is interpreted in units of pixels 
(prior to oversampling) irrespective of the profile.
 
 @end table
 
 @node MakeProfiles profile settings, MakeProfiles output dataset, MakeProfiles 
catalog, Invoking astmkprof
 @subsubsection MakeProfiles profile settings
 
-The profile parameters that differ between each created profile are
-specified through the columns in the input catalog and described in
-@ref{MakeProfiles catalog}. Besides those there are general settings for
-some profiles that don't differ between one profile and another, they are a
-property of the general process. For example how many random points to use
-in the monte-carlo integration, this value is fixed for all the
-profiles. The options described in this section are for configuring such
-properties.
+The profile parameters that differ between each created profile are specified 
through the columns in the input catalog and described in @ref{MakeProfiles 
catalog}.
+Besides those, there are general settings for some profiles that don't differ 
between one profile and another; they are a property of the general process.
+For example, the number of random points to use in the Monte Carlo integration 
is fixed for all the profiles.
+The options described in this section are for configuring such properties.
 
 @table @option
 
 @item --mode=STR
-Interpret the center position columns (@option{--ccol} in @ref{MakeProfiles
-catalog}) in image or WCS coordinates. This option thus accepts only two
-values: @option{img} and @option{wcs}. It is mandatory when a catalog is
-being used as input.
+Interpret the center position columns (@option{--ccol} in @ref{MakeProfiles 
catalog}) in image or WCS coordinates.
+This option thus accepts only two values: @option{img} and @option{wcs}.
+It is mandatory when a catalog is being used as input.
 
 @item -r
 @itemx --numrandom
-The number of random points used in the central regions of the
-profile, see @ref{Sampling from a function}.
+The number of random points used in the central regions of the profile, see 
@ref{Sampling from a function}.
 
 @item -e
 @itemx --envseed
 @cindex Seed, Random number generator
 @cindex Random number generator, Seed
-Use the value to the @code{GSL_RNG_SEED} environment variable to
-generate the random Monte Carlo sampling distribution, see
-@ref{Sampling from a function} and @ref{Generating random
-numbers}.
+Use the value given to the @code{GSL_RNG_SEED} environment variable to 
generate the random Monte Carlo sampling distribution, see @ref{Sampling from a 
function} and @ref{Generating random numbers}.
 
 @item -t FLT
 @itemx --tolerance=FLT
-The tolerance to switch from Monte Carlo integration to the central pixel
-value, see @ref{Sampling from a function}.
+The tolerance to switch from Monte Carlo integration to the central pixel 
value, see @ref{Sampling from a function}.
 
 @item -p
 @itemx --tunitinp
-The truncation column of the catalog is in units of pixels. By
-default, the truncation column is considered to be in units of the
-radial parameters of the profile (@option{--rcol}). Read it as
-`t-unit-in-p' for `truncation unit in pixels'.
+The truncation column of the catalog is in units of pixels.
+By default, the truncation column is considered to be in units of the radial 
parameters of the profile (@option{--rcol}).
+Read it as `t-unit-in-p' for `truncation unit in pixels'.
 
 @item -f
 @itemx --mforflatpix
-When making fixed value profiles (flat and circumference, see
-`@option{--fcol}'), don't use the value in the column specified by
-`@option{--mcol}' as the magnitude. Instead use it as the exact value that
-all the pixels of these profiles should have. This option is irrelevant for
-other types of profiles. This option is very useful for creating masks, or
-labeled regions in an image. Any integer, or floating point value can used
-in this column with this option, including @code{NaN} (or `@code{nan}', or
-`@code{NAN}', case is irrelevant), and infinities (@code{inf}, @code{-inf},
-or @code{+inf}).
-
-For example, with this option if you set the value in the magnitude column
-(@option{--mcol}) to @code{NaN}, you can create an elliptical or circular
-mask over an image (which can be given as the argument), see @ref{Blank
-pixels}. Another useful application of this option is to create labeled
-elliptical or circular apertures in an image. To do this, set the value in
-the magnitude column to the label you want for this profile. This labeled
-image can then be used in combination with NoiseChisel's output (see
-@ref{NoiseChisel output}) to do aperture photometry with MakeCatalog (see
-@ref{MakeCatalog}).
-
-Alternatively, if you want to mark regions of the image (for example with
-an elliptical circumference) and you don't want to use NaN values (as
-explained above) for some technical reason, you can get the minimum or
-maximum value in the image @footnote{The minimum will give a better result,
-because the maximum can be too high compared to most pixels in the image,
-making it harder to display.} using Arithmetic (see @ref{Arithmetic}), then
-use that value in the magnitude column along with this option for all the
-profiles.
-
-Please note that when using MakeProfiles on an already existing image, you
-have to set `@option{--oversample=1}'. Otherwise all the profiles will be
-scaled up based on the oversampling scale in your configuration files (see
-@ref{Configuration files}) unless you have accounted for oversampling in
-your catalog.
+When making fixed value profiles (flat and circumference, see 
`@option{--fcol}'), don't use the value in the column specified by 
`@option{--mcol}' as the magnitude.
+Instead use it as the exact value that all the pixels of these profiles should 
have.
+This option is irrelevant for other types of profiles.
+This option is very useful for creating masks, or labeled regions in an image.
+Any integer or floating point value can be used in this column with this 
option, including @code{NaN} (or `@code{nan}', or `@code{NAN}', case is 
irrelevant), and infinities (@code{inf}, @code{-inf}, or @code{+inf}).
+
+For example, with this option, if you set the value in the magnitude column 
(@option{--mcol}) to @code{NaN}, you can create an elliptical or circular mask 
over an image (which can be given as the argument), see @ref{Blank pixels}.
+Another useful application of this option is to create labeled elliptical or 
circular apertures in an image.
+To do this, set the value in the magnitude column to the label you want for 
this profile.
+This labeled image can then be used in combination with NoiseChisel's output 
(see @ref{NoiseChisel output}) to do aperture photometry with MakeCatalog (see 
@ref{MakeCatalog}).
+
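+For example (the file names here are hypothetical), a catalog whose magnitude 
column contains @code{nan} can be used to mask elliptical regions over an 
existing image with a sketch like this:
+
+@example
+$ astmkprof mask-cat.txt --background=image.fits \
+            --mforflatpix --mode=img --oversample=1
+@end example
+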
+Alternatively, if you want to mark regions of the image (for example with an 
elliptical circumference) and you don't want to use NaN values (as explained 
above) for some technical reason, you can get the minimum or maximum value in 
the image@footnote{
+The minimum will give a better result, because the maximum can be too high 
compared to most pixels in the image, making it harder to display.}
+using Arithmetic (see @ref{Arithmetic}), then use that value in the magnitude 
column along with this option for all the profiles.
+
+Please note that when using MakeProfiles on an already existing image, you 
have to set `@option{--oversample=1}'.
+Otherwise all the profiles will be scaled up based on the oversampling scale 
in your configuration files (see @ref{Configuration files}) unless you have 
accounted for oversampling in your catalog.
 
 @item --mcolisbrightness
-The value given in the ``magnitude column'' (specified by @option{--mcol},
-see @ref{MakeProfiles catalog}) must be interpreted as brightness, not
-magnitude. The zeropoint magnitude (value to the @option{--zeropoint}
-option) is ignored and the given value must have the same units as the
-input dataset's pixels.
-
-Recall that the total profile magnitude or brightness that is specified
-with in the @option{--mcol} column of the input catalog is not an
-integration to infinity, but the actual sum of pixels in the profile (until
-the desired truncation radius). See @ref{Profile magnitude} for more on
-this point.
+The value given in the ``magnitude column'' (specified by @option{--mcol}, see 
@ref{MakeProfiles catalog}) must be interpreted as brightness, not magnitude.
+The zeropoint magnitude (value to the @option{--zeropoint} option) is ignored 
and the given value must have the same units as the input dataset's pixels.
+
+Recall that the total profile magnitude or brightness that is specified in the 
@option{--mcol} column of the input catalog is not an integration to infinity, 
but the actual sum of pixels in the profile (until the desired truncation 
radius).
+See @ref{Profile magnitude} for more on this point.
 
 @item --magatpeak
-The magnitude column in the catalog (see @ref{MakeProfiles catalog})
-will be used to find the brightness only for the peak profile pixel,
-not the full profile. Note that this is the flux of the profile's peak
-pixel in the final output of MakeProfiles. So beware of the
-oversampling, see @ref{Oversampling}.
-
-This option can be useful if you want to check a mock profile's total
-magnitude at various truncation radii. Without this option, no matter
-what the truncation radius is, the total magnitude will be the same as
-that given in the catalog. But with this option, the total magnitude
-will become brighter as you increase the truncation radius.
-
-In sharper profiles, sometimes the accuracy of measuring the peak
-profile flux is more than the overall object brightness. In such
-cases, with this option, the final profile will be built such that its
-peak has the given magnitude, not the total profile.
+The magnitude column in the catalog (see @ref{MakeProfiles catalog}) will be 
used to find the brightness only for the peak profile pixel, not the full 
profile.
+Note that this is the flux of the profile's peak pixel in the final output of 
MakeProfiles.
+So beware of the oversampling, see @ref{Oversampling}.
+
+This option can be useful if you want to check a mock profile's total 
magnitude at various truncation radii.
+Without this option, no matter what the truncation radius is, the total 
magnitude will be the same as that given in the catalog.
+But with this option, the total magnitude will become brighter as you increase 
the truncation radius.
+
+In sharper profiles, sometimes the accuracy of measuring the peak profile flux 
matters more than that of the overall object brightness.
+In such cases, with this option, the final profile will be built such that its 
peak has the given magnitude, not the total profile.
 
 @cartouche
-@strong{CAUTION:} If you want to use this option for comparing with
-observations, please note that MakeProfiles does not do convolution. Unless
-you have de-convolved your data, your images are convolved with the
-instrument and atmospheric PSF, see @ref{PSF}. Particularly in sharper
-profiles, the flux in the peak pixel is strongly decreased after
-convolution. Also note that in such cases, besides de-convolution, you will
-have to set @option{--oversample=1} otherwise after resampling your profile
-with Warp (see @ref{Warp}), the peak flux will be different.
+@strong{CAUTION:} If you want to use this option for comparing with 
observations, please note that MakeProfiles does not do convolution.
+Unless you have de-convolved your data, your images are convolved with the 
instrument and atmospheric PSF, see @ref{PSF}.
+Particularly in sharper profiles, the flux in the peak pixel is strongly 
decreased after convolution.
+Also note that in such cases, besides de-convolution, you will have to set 
@option{--oversample=1}; otherwise, after resampling your profile with Warp 
(see @ref{Warp}), the peak flux will be different.
 @end cartouche
 
 @item -X INT,INT
 @itemx --shift=INT,INT
-Shift all the profiles and enlarge the image along each dimension. To
-better understand this option, please see @mymath{n} in @ref{If convolving
-afterwards}. This is useful when you want to convolve the image
-afterwards. If you are using an external PSF, be sure to oversample it to
-the same scale used for creating the mock images. If a background image is
-specified, any possible value to this option is ignored.
+Shift all the profiles and enlarge the image along each dimension.
+To better understand this option, please see @mymath{n} in @ref{If convolving 
afterwards}.
+This is useful when you want to convolve the image afterwards.
+If you are using an external PSF, be sure to oversample it to the same scale 
used for creating the mock images.
+If a background image is specified, any possible value to this option is 
ignored.
 
 @item -c
 @itemx --prepforconv
-Shift all the profiles and enlarge the image based on half the width
-of the first Moffat or Gaussian profile in the catalog, considering
-any possible oversampling see @ref{If convolving
-afterwards}. @option{--prepforconv} is only checked and possibly
-activated if @option{--xshift} and @option{--yshift} are both zero
-(after reading the command-line and configuration files). If a
-background image is specified, any possible value to this option is
-ignored.
+Shift all the profiles and enlarge the image based on half the width of the 
first Moffat or Gaussian profile in the catalog, considering any possible 
oversampling, see @ref{If convolving afterwards}.
+@option{--prepforconv} is only checked and possibly activated if 
@option{--xshift} and @option{--yshift} are both zero (after reading the 
command-line and configuration files).
+If a background image is specified, any possible value to this option is 
ignored.
 
 @item -z FLT
 @itemx --zeropoint=FLT
@@ -22167,67 +16182,46 @@ The zero-point magnitude of the image.
 
 @item -w FLT
 @itemx --circumwidth=FLT
-The width of the circumference if the profile is to be an elliptical
-circumference or annulus. See the explanations for this type of profile in
-@option{--fcol}.
+The width of the circumference if the profile is to be an elliptical 
circumference or annulus.
+See the explanations for this type of profile in @option{--fcol}.
 
 @item -R
 @itemx --replace
-Do not add the pixels of each profile over the background (possibly crowded
-by other profiles), replace them. By default, when two profiles overlap,
-the final pixel value is the sum of all the profiles that overlap on that
-pixel. When this option is given, the pixels are not added but replaced by
-the newer profile's pixel and any value under it is lost.
+Do not add the pixels of each profile over the background (possibly crowded by 
other profiles); replace them.
+By default, when two profiles overlap, the final pixel value is the sum of all 
the profiles that overlap on that pixel.
+When this option is given, the pixels are not added but replaced by the newer 
profile's pixel and any value under it is lost.
 
 @cindex CPU threads
 @cindex Threads, CPU
-When order matters, make sure to use this function with
-`@option{--numthreads=1}'. When multiple threads are used, the separate
-profiles are built asynchronously and not in order. Since order does not
-matter in an addition, this causes no problems by default but has to be
-considered when this option is given. Using multiple threads is no problem
-if the profiles are to be used as a mask with a blank or fixed value (see
-`@option{--mforflatpix}') since all their pixel values are the same.
-
-Note that only non-zero pixels are replaced. With radial profiles (for
-example S@'ersic or Moffat) only values above zero will be part of the
-profile. However, when using flat profiles with the
-`@option{--mforflatpix}' option, you should be careful not to give a
-@code{0.0} value as the flat profile's pixel value.
+When order matters, make sure to use this option with 
`@option{--numthreads=1}'.
+When multiple threads are used, the separate profiles are built asynchronously 
and not in order.
+Since order does not matter in an addition, this causes no problems by default 
but has to be considered when this option is given.
+Using multiple threads is no problem if the profiles are to be used as a mask 
with a blank or fixed value (see `@option{--mforflatpix}') since all their 
pixel values are the same.
+
+Note that only non-zero pixels are replaced.
+With radial profiles (for example S@'ersic or Moffat) only values above zero 
will be part of the profile.
+However, when using flat profiles with the `@option{--mforflatpix}' option, 
you should be careful not to give a @code{0.0} value as the flat profile's 
pixel value.
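+
+For example (a sketch with hypothetical file names), building labeled 
apertures where later catalog rows deterministically overwrite earlier ones:
+
+@example
+$ astmkprof apertures.txt --replace --mforflatpix \
+            --mode=img --numthreads=1 --oversample=1
+@end example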
 
 @end table
 
 @node MakeProfiles output dataset, MakeProfiles log file, MakeProfiles profile 
settings, Invoking astmkprof
 @subsubsection MakeProfiles output dataset
-MakeProfiles takes an input catalog uses basic properties that are defined
-there to build a dataset, for example a 2D image containing the profiles in
-the catalog. In @ref{MakeProfiles catalog} and @ref{MakeProfiles profile
-settings}, the catalog and profile settings were discussed. The options of
-this section, allow you to configure the output dataset (or the canvas that
-will host the built profiles).
+MakeProfiles takes an input catalog and uses basic properties that are defined 
there to build a dataset, for example a 2D image containing the profiles in the 
catalog.
+In @ref{MakeProfiles catalog} and @ref{MakeProfiles profile settings}, the 
catalog and profile settings were discussed.
+The options of this section allow you to configure the output dataset (or the 
canvas that will host the built profiles).
 
 @table @option
 
 @item -k STR
 @itemx --background=STR
-A background image FITS file to build the profiles on. The extension that
-contains the image should be specified with the @option{--backhdu} option,
-see below. When a background image is specified, it will be used to derive
-all the information about the output image. Hence, the following options
-will be ignored: @option{--mergedsize}, @option{--oversample},
-@option{--crpix}, @option{--crval} (generally, all other WCS related
-parameters) and the output's data type (see @option{--type} in @ref{Input
-output options}).
-
-The image will act like a canvas to build the profiles on: profile pixel
-values will be summed with the background image pixel values. With the
-@option{--replace} option you can disable this behavior and replace the
-profile pixels with the background pixels. If you want to use all the image
-information above, except for the pixel values (you want to have a blank
-canvas to build the profiles on, based on an input image), you can call
-@option{--clearcanvas}, to set all the input image's pixels to zero before
-starting to build the profiles over it (this is done in memory after
-reading the input, so nothing will happen to your input file).
+A background image FITS file to build the profiles on.
+The extension that contains the image should be specified with the 
@option{--backhdu} option, see below.
+When a background image is specified, it will be used to derive all the 
information about the output image.
+Hence, the following options will be ignored: @option{--mergedsize}, 
@option{--oversample}, @option{--crpix}, @option{--crval} (generally, all other 
WCS related parameters) and the output's data type (see @option{--type} in 
@ref{Input output options}).
+
+The image will act like a canvas to build the profiles on: profile pixel 
values will be summed with the background image pixel values.
+With the @option{--replace} option you can disable this behavior and have the 
profile pixels replace the background pixels (instead of being added to them).
+If you want to use all the image information above, except for the pixel 
values (you want to have a blank canvas to build the profiles on, based on an 
input image), you can call @option{--clearcanvas} to set all the input image's 
pixels to zero before starting to build the profiles over it (this is done in 
memory after reading the input, so nothing will happen to your input file).
 
 @item -B STR/INT
 @itemx --backhdu=STR/INT
@@ -22235,258 +16229,176 @@ The header data unit (HDU) of the file given to 
@option{--background}.
 
 @item -C
 @itemx --clearcanvas
-When an input image is specified (with the @option{--background} option,
-set all its pixels to 0.0 immediately after reading it into
-memory. Effectively, this will allow you to use all its properties
-(described under the @option{--background} option), without having to worry
-about the pixel values.
-
-@option{--clearcanvas} can come in handy in many situations, for example if
-you want to create a labeled image (segmentation map) for creating a
-catalog (see @ref{MakeCatalog}). In other cases, you might have modeled the
-objects in an image and want to create them on the same frame, but without
-the original pixel values.
+When an input image is specified (with the @option{--background} option), set 
all its pixels to 0.0 immediately after reading it into memory.
+Effectively, this will allow you to use all its properties (described under 
the @option{--background} option), without having to worry about the pixel 
values.
+
+@option{--clearcanvas} can come in handy in many situations, for example if 
you want to create a labeled image (segmentation map) for creating a catalog 
(see @ref{MakeCatalog}).
+In other cases, you might have modeled the objects in an image and want to 
create them on the same frame, but without the original pixel values.
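+
+For example, a call like the following sketch would build the catalog's profiles over a zero-valued canvas that keeps the size and WCS of the background image (both file names are hypothetical):
+
+@example
+$ astmkprof catalog.fits --background=image.fits --clearcanvas
+@end example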
 
 @item -E STR/INT,FLT[,FLT,[...]]
 @itemx --kernel=STR/INT,FLT[,FLT,[...]]
-Only build one kernel profile with the parameters given as the values to
-this option. The different values must be separated by a comma
-(@key{,}). The first value identifies the radial function of the profile,
-either through a string or through a number (see description of
-@option{--fcol} in @ref{MakeProfiles catalog}). Each radial profile needs a
-different total number of parameters: S@'ersic and Moffat functions need 3
-parameters: radial, S@'ersic index or Moffat @mymath{\beta}, and truncation
-radius. The Gaussian function needs two parameters: radial and truncation
-radius. The point function doesn't need any parameters. Flat, circumference
-and distance profiles just need one parameter (truncation radius).
-
-The PSF or kernel is a unique (and highly constrained) type of profile: the
-sum of its pixels must be one, its center must be the center of the central
-pixel (in an image with an odd number of pixels on each side), and commonly
-it is circular, so its axis ratio and position angle are one and zero
-respectively. Kernels are commonly necessary for various data analysis and
-data manipulation steps (for example see @ref{Convolve}, and
-@ref{NoiseChisel}). Because of this it is inconvenient to define a catalog
-with one row and many zero valued columns (for all the non-necessary
-parameters). With this option, it is possible to easily create a kernel
-with MakeProfiles without a catalog. Here are some examples:
+Only build one kernel profile with the parameters given as the values to this 
option.
+The different values must be separated by a comma (@key{,}).
+The first value identifies the radial function of the profile, either through 
a string or through a number (see description of @option{--fcol} in 
@ref{MakeProfiles catalog}).
+Each radial profile needs a different total number of parameters: S@'ersic and 
Moffat functions need 3 parameters (radial, S@'ersic index or Moffat 
@mymath{\beta}, and truncation radius).
+The Gaussian function needs two parameters: radial and truncation radius.
+The point function doesn't need any parameters, while flat and circumference 
profiles just need one parameter (truncation radius).
+
+The PSF or kernel is a unique (and highly constrained) type of profile: the 
sum of its pixels must be one, its center must be the center of the central 
pixel (in an image with an odd number of pixels on each side), and commonly it 
is circular, so its axis ratio and position angle are one and zero respectively.
+Kernels are commonly necessary for various data analysis and data manipulation 
steps (for example see @ref{Convolve} and @ref{NoiseChisel}).
+Because of this, it is inconvenient to define a catalog with one row and many 
zero-valued columns (for all the unnecessary parameters).
+Hence, with this option, it is possible to create a kernel with MakeProfiles 
without the need to create a catalog.
+Here are some examples:
 
 @table @option
 @item --kernel=moffat,3,2.8,5
-A circular Moffat kernel with FWHM of 3 pixels, @mymath{\beta=2.8} which is
-truncated at 5 times the FWHM.
+A circular Moffat kernel with FWHM of 3 pixels and @mymath{\beta=2.8}, 
truncated at 5 times the FWHM.
 
 @item --kernel=gaussian,2,3
 A circular Gaussian kernel with FWHM of 2 pixels and truncated at 3 times
 the FWHM.
 @end table
 
-This option may also be used to create a 3D kernel. To do that, two small
-modifications are necessary: add a @code{-3d} (or @code{-3D}) to the
-profile name (for example @code{moffat-3d}) and add a number (axis-ratio
-along the third dimension) to the end of the parameters for all profiles
-except @code{point}. The main reason behind providing an axis ratio in the
-third dimension is that in 3D astronomical datasets, commonly the third
-dimension doesn't have the same nature (units/sampling) as the first and
-second.
-
-For example in IFU datacubes, the first and second dimensions are angular
-positions (like RA and Dec) but the third is in units of Angstroms for
-wavelength. Because of this different nature (which also affects the
-processing), it may be necessary for the kernel to have a different extent
-in that direction.
-
-If the 3rd dimension axis ratio is equal to @mymath{1.0}, then the kernel
-will be a spheroid. If its smaller than @mymath{1.0}, the kernel will be
-button-shaped: extended less in the third dimension. However, when it is
-larger than @mymath{1.0}, the kernel will be bullet-shaped: extended more
-in the third dimension. In the latter case, the radial parameter will
-correspond to the length along the 3rd dimension. For example, let's have a
-look at the two examples above but in 3D:
+This option may also be used to create a 3D kernel.
+To do that, two small modifications are necessary: add a @code{-3d} (or 
@code{-3D}) to the profile name (for example @code{moffat-3d}) and add a number 
(axis-ratio along the third dimension) to the end of the parameters for all 
profiles except @code{point}.
+The main reason behind providing an axis ratio in the third dimension is that 
in 3D astronomical datasets, commonly the third dimension doesn't have the same 
nature (units/sampling) as the first and second.
+
+For example in IFU datacubes, the first and second dimensions are angular 
positions (like RA and Dec) but the third is in units of Angstroms for 
wavelength.
+Because of this different nature (which also affects the processing), it may 
be necessary for the kernel to have a different extent in that direction.
+
+If the 3rd dimension axis ratio is equal to @mymath{1.0}, then the kernel will 
be a spheroid.
+If it is smaller than @mymath{1.0}, the kernel will be button-shaped: extended 
less in the third dimension.
+However, when it is larger than @mymath{1.0}, the kernel will be 
bullet-shaped: extended more in the third dimension.
+In the latter case, the radial parameter will correspond to the length along 
the 3rd dimension.
+For example, let's have a look at the two examples above but in 3D:
 
 @table @option
 @item --kernel=moffat-3d,3,2.8,5,0.5
-An ellipsoid Moffat kernel with FWHM of 3 pixels, @mymath{\beta=2.8} which
-is truncated at 5 times the FWHM. The ellipsoid is circular in the first
-two dimensions, but in the third dimension its extent is half the first
-two.
+An ellipsoidal Moffat kernel with FWHM of 3 pixels and @mymath{\beta=2.8}, 
truncated at 5 times the FWHM.
+The ellipsoid is circular in the first two dimensions, but in the third 
dimension its extent is half the first two.
 
 @item --kernel=gaussian-3d,2,3,1
 A spherical Gaussian kernel with FWHM of 2 pixels and truncated at 3 times
 the FWHM.
 @end table
 
-Ofcourse, if a specific kernel is needed that doesn't fit the constraints
-imposed by this option, you can always use a catalog to define any
-arbitrary kernel. Just call the @option{--individual} and
-@option{--nomerged} options to make sure that it is built as a separate
-file (individually) and no ``merged'' image of the input profiles is
-created.
+Of course, if a specific kernel is needed that doesn't fit the constraints 
imposed by this option, you can always use a catalog to define any arbitrary 
kernel.
+Just call the @option{--individual} and @option{--nomerged} options to make 
sure that it is built as a separate file (individually) and no ``merged'' image 
of the input profiles is created.
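+
+As a complete command, the first 2D example above could be built with a sketch like this (@option{--oversample=1} is added here only to keep the kernel at the final sampling; adjust or remove it as needed):
+
+@example
+$ astmkprof --kernel=moffat,3,2.8,5 --oversample=1
+@end example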
 
 @item -x INT,INT
 @itemx --mergedsize=INT,INT
-The number of pixels along each axis of the output, in FITS order. This is
-before over-sampling. For example if you call MakeProfiles with
-@option{--mergedsize=100,150 --oversample=5} (assuming no shift due for
-later convolution), then the final image size along the first axis will be
-500 by 750 pixels. Fractions are acceptable as values for each dimension,
-however, they must reduce to an integer, so
-@option{--mergedsize=150/3,300/3} is acceptable but
-@option{--mergedsize=150/4,300/4} is not.
-
-When viewing a FITS image in DS9, the first FITS dimension is in the
-horizontal direction and the second is vertical. As an example, the image
-created with the example above will have 500 pixels horizontally and 750
-pixels vertically.
+The number of pixels along each axis of the output, in FITS order.
+This is before over-sampling.
+For example if you call MakeProfiles with @option{--mergedsize=100,150 
--oversample=5} (assuming no shift is needed for later convolution), then the 
final image size will be 500 by 750 pixels.
+Fractions are acceptable as values for each dimension, however, they must 
reduce to an integer, so @option{--mergedsize=150/3,300/3} is acceptable but 
@option{--mergedsize=150/4,300/4} is not.
+
+When viewing a FITS image in DS9, the first FITS dimension is in the 
horizontal direction and the second is vertical.
+As an example, the image created with the example above will have 500 pixels 
horizontally and 750 pixels vertically.
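+
+For example, the scenario described above corresponds to a call like this sketch (the catalog file name is hypothetical):
+
+@example
+$ astmkprof catalog.fits --mergedsize=100,150 --oversample=5
+@end example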
 
 If a background image is specified, this option is ignored.
 
 @item -s INT
 @itemx --oversample=INT
-The scale to over-sample the profiles and final image. If not an odd
-number, will be added by one, see @ref{Oversampling}. Note that this
-@option{--oversample} will remain active even if an input image is
-specified. If your input catalog is based on the background image, be sure
-to set @option{--oversample=1}.
+The scale to over-sample the profiles and final image.
+If it is not an odd number, one will be added to it, see @ref{Oversampling}.
+Note that this @option{--oversample} will remain active even if an input image 
is specified.
+If your input catalog is based on the background image, be sure to set 
@option{--oversample=1}.
 
 @item --psfinimg
-Build the possibly existing PSF profiles (Moffat or Gaussian) in the
-catalog into the final image. By default they are built separately so
-you can convolve your images with them, thus their magnitude and
-positions are ignored. With this option, they will be built in the
-final image like every other galaxy profile. To have a final PSF in
-your image, make a point profile where you want the PSF and after
-convolution it will be the PSF.
+Build the possibly existing PSF profiles (Moffat or Gaussian) in the catalog 
into the final image.
+By default they are built separately so you can convolve your images with 
them; their magnitudes and positions are thus ignored.
+With this option, they will be built in the final image like every other 
galaxy profile.
+To have a final PSF in your image, make a point profile where you want the 
PSF; after convolution, it will become the PSF.
 
 @item -i
 @itemx --individual
 @cindex Individual profiles
 @cindex Build individual profiles
-If this option is called, each profile is created in a separate FITS
-file within the same directory as the output and the row number of the
-profile (starting from zero) in the name. The file for each row's
-profile will be in the same directory as the final combined image of
-all the profiles and will have the final image's name as a suffix. So
-for example if the final combined image is named
-@file{./out/fromcatalog.fits}, then the first profile that will be
-created with this option will be named
-@file{./out/0_fromcatalog.fits}.
-
-Since each image only has one full profile out to the truncation
-radius the profile is centered and so, only the sub-pixel position of
-the profile center is important for the outputs of this option. The
-output will have an odd number of pixels. If there is no oversampling,
-the central pixel will contain the profile center. If the value to
-@option{--oversample} is larger than unity, then the profile center is
-on any of the central @option{--oversample}'d pixels depending on the
-fractional value of the profile center.
-
-If the fractional value is larger than half, it is on the bottom half
-of the central region. This is due to the FITS definition of a real
-number position: The center of a pixel has fractional value
-@mymath{0.00} so each pixel contains these fractions: .5 -- .75 -- .00
-(pixel center) -- .25 -- .5.
+If this option is called, each profile is created in a separate FITS file 
within the same directory as the output, with the row number of the profile 
(starting from zero) in its name.
+The file for each row's profile will be in the same directory as the final 
combined image of all the profiles and will have the final image's name as a 
suffix.
+So for example if the final combined image is named 
@file{./out/fromcatalog.fits}, then the first profile that will be created with 
this option will be named @file{./out/0_fromcatalog.fits}.
+
+Since each image only has one full profile out to the truncation radius, the 
profile is centered, and so only the sub-pixel position of the profile center 
is important for the outputs of this option.
+The output will have an odd number of pixels.
+If there is no oversampling, the central pixel will contain the profile center.
+If the value to @option{--oversample} is larger than unity, then the profile 
center is on any of the central @option{--oversample}'d pixels depending on the 
fractional value of the profile center.
+
+If the fractional value is larger than half, it is on the bottom half of the 
central region.
+This is due to the FITS definition of a real number position: The center of a 
pixel has fractional value @mymath{0.00} so each pixel contains these 
fractions: .5 -- .75 -- .00 (pixel center) -- .25 -- .5.
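+
+For example, the following sketch only produces the individual profile files, without a merged image (using @option{--nomerged}, described below):
+
+@example
+$ astmkprof catalog.fits --individual --nomerged
+@end example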
 
 @item -m
 @itemx --nomerged
-Don't make a merged image. By default after making the profiles, they are
-added to a final image with side lengths specified by @option{--mergedsize}
-if they overlap with it.
+Don't make a merged image.
+By default after making the profiles, they are added to a final image with 
side lengths specified by @option{--mergedsize} if they overlap with it.
 
 @end table
 
 
 @noindent
-The options below can be used to define the world coordinate system (WCS)
-properties of the MakeProfiles outputs. The option names are deliberately
-chosen to be the same as the FITS standard WCS keywords. See Section 8 of
-@url{https://doi.org/10.1051/0004-6361/201015362, Pence et al [2010]} for a
-short introduction to WCS in the FITS standard@footnote{The world
-coordinate standard in FITS is a very beautiful and powerful concept to
-link/associate datasets with the outside world (other datasets). The
-description in the FITS standard (link above) only touches the tip of the
-ice-burg. To learn more please see
-@url{https://doi.org/10.1051/0004-6361:20021326, Greisen and Calabretta
-[2002]}, @url{https://doi.org/10.1051/0004-6361:20021327, Calabretta and
-Greisen [2002]}, @url{https://doi.org/10.1051/0004-6361:20053818, Greisen
-et al. [2006]}, and
-@url{http://www.atnf.csiro.au/people/mcalabre/WCS/dcs_20040422.pdf,
-Calabretta et al.}}.
-
-If you look into the headers of a FITS image with WCS for example you will
-see all these names but in uppercase and with numbers to represent the
-dimensions, for example @code{CRPIX1} and @code{PC2_1}. You can see the
-FITS headers with Gnuastro's @ref{Fits} program using a command like this:
-@command{$ astfits -p image.fits}.
-
-If the values given to any of these options does not correspond to the
-number of dimensions in the output dataset, then no WCS information will be
-added.
+The options below can be used to define the world coordinate system (WCS) 
properties of the MakeProfiles outputs.
+The option names are deliberately chosen to be the same as the FITS standard 
WCS keywords.
+See Section 8 of @url{https://doi.org/10.1051/0004-6361/201015362, Pence et al 
[2010]} for a short introduction to WCS in the FITS standard@footnote{The world 
coordinate standard in FITS is a very beautiful and powerful concept to 
link/associate datasets with the outside world (other datasets).
+The description in the FITS standard (link above) only touches the tip of the 
iceberg.
+To learn more please see @url{https://doi.org/10.1051/0004-6361:20021326, 
Greisen and Calabretta [2002]}, 
@url{https://doi.org/10.1051/0004-6361:20021327, Calabretta and Greisen 
[2002]}, @url{https://doi.org/10.1051/0004-6361:20053818, Greisen et al. 
[2006]}, and 
@url{http://www.atnf.csiro.au/people/mcalabre/WCS/dcs_20040422.pdf, Calabretta 
et al.}}.
+
+If you look into the headers of a FITS image with WCS for example you will see 
all these names but in uppercase and with numbers to represent the dimensions, 
for example @code{CRPIX1} and @code{PC2_1}.
+You can see the FITS headers with Gnuastro's @ref{Fits} program using a 
command like this: @command{$ astfits -p image.fits}.
+
+If the values given to any of these options do not correspond to the number 
of dimensions in the output dataset, then no WCS information will be added.
 
 @table @option
 
 @item --crpix=FLT,FLT
-The pixel coordinates of the WCS reference point. Fractions are acceptable
-for the values of this option.
+The pixel coordinates of the WCS reference point.
+Fractions are acceptable for the values of this option.
 
 @item --crval=FLT,FLT
-The WCS coordinates of the Reference point. Fractions are acceptable for
-the values of this option.
+The WCS coordinates of the reference point.
+Fractions are acceptable for the values of this option.
 
 @item --cdelt=FLT,FLT
-The resolution (size of one data-unit or pixel in WCS units) of the
-non-oversampled dataset. Fractions are acceptable for the values of this
-option.
+The resolution (size of one data-unit or pixel in WCS units) of the 
non-oversampled dataset.
+Fractions are acceptable for the values of this option.
 
 @item --pc=FLT,FLT,FLT,FLT
-The PC matrix of the WCS rotation, see the FITS standard (link above) to
-better understand the PC matrix.
+The PC matrix of the WCS rotation, see the FITS standard (link above) to 
better understand the PC matrix.
 
 @item --cunit=STR,STR
-The units of each WCS axis, for example @code{deg}. Note that these values
-are part of the FITS standard (link above). MakeProfiles won't complain if
-you use non-standard values, but later usage of them might cause trouble.
+The units of each WCS axis, for example @code{deg}.
+Note that these values are part of the FITS standard (link above).
+MakeProfiles won't complain if you use non-standard values, but later usage of 
them might cause trouble.
 
 @item --ctype=STR,STR
-The type of each WCS axis, for example @code{RA---TAN} and
-@code{DEC--TAN}. Note that these values are part of the FITS standard (link
-above). MakeProfiles won't complain if you use non-standard values, but
-later usage of them might cause trouble.
+The type of each WCS axis, for example @code{RA---TAN} and @code{DEC--TAN}.
+Note that these values are part of the FITS standard (link above).
+MakeProfiles won't complain if you use non-standard values, but later usage of 
them might cause trouble.
 
 @end table
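+
+As an example, a sketch of a call that defines a complete (2D) WCS for the output might look like this (all of the values below are hypothetical):
+
+@example
+$ astmkprof catalog.fits --mergedsize=1000,1000    \
+            --crpix=500.5,500.5 --crval=150.1,2.0  \
+            --cdelt=0.0001,0.0001 --cunit=deg,deg  \
+            --ctype=RA---TAN,DEC--TAN
+@end example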
 
 @node MakeProfiles log file,  , MakeProfiles output dataset, Invoking astmkprof
 @subsubsection MakeProfiles log file
 
-Besides the final merged dataset of all the profiles, or the individual
-datasets (see @ref{MakeProfiles output dataset}), if the @option{--log}
-option is called MakeProfiles will also create a log file in the current
-directory (where you run MockProfiles). See @ref{Common options} for a full
-description of @option{--log} and other options that are shared between all
-Gnuastro programs. The values for each column are explained in the first
-few commented lines of the log file (starting with @command{#}
-character). Here is a more complete description.
+Besides the final merged dataset of all the profiles, or the individual 
datasets (see @ref{MakeProfiles output dataset}), if the @option{--log} option 
is called, MakeProfiles will also create a log file in the current directory 
(where you run MakeProfiles).
+See @ref{Common options} for a full description of @option{--log} and other 
options that are shared between all Gnuastro programs.
+The values for each column are explained in the first few commented lines of 
the log file (starting with the @command{#} character).
+Here is a more complete description.
 
 @itemize
 @item
 An ID (row number of profile in input catalog).
 
 @item
-The total magnitude of the profile in the output dataset. When the profile
-does not completely overlap with the output dataset, this will be different
-from your input magnitude.
+The total magnitude of the profile in the output dataset.
+When the profile does not completely overlap with the output dataset, this 
will be different from your input magnitude.
 
 @item
-The number of pixels (in the oversampled image) which used Monte Carlo
-integration and not the central pixel value, see @ref{Sampling from a
-function}.
+The number of pixels (in the oversampled image) which used Monte Carlo 
integration and not the central pixel value, see @ref{Sampling from a function}.
 
 @item
 The fraction of flux in the Monte Carlo integrated pixels.
 
 @item
-If an individual image was created, this column will have a value of
-@code{1}, otherwise it will have a value of @code{0}.
+If an individual image was created, this column will have a value of @code{1}, 
otherwise it will have a value of @code{0}.
 @end itemize
 
 
@@ -22504,13 +16416,9 @@ If an individual image was created, this column will 
have a value of
 @section MakeNoise
 
 @cindex Noise
-Real data are always buried in noise, therefore to finalize a
-simulation of real data (for example to test our observational
-algorithms) it is essential to add noise to the mock profiles created
-with MakeProfiles, see @ref{MakeProfiles}. Below, the general
-principles and concepts to help understand how noise is quantified is
-discussed.  MakeNoise options and argument are then discussed in
-@ref{Invoking astmknoise}.
+Real data are always buried in noise; therefore, to finalize a simulation of 
real data (for example to test our observational algorithms) it is essential to 
add noise to the mock profiles created with MakeProfiles, see 
@ref{MakeProfiles}.
+Below, the general principles and concepts to help understand how noise is 
quantified are discussed.
+MakeNoise options and arguments are then discussed in @ref{Invoking 
astmknoise}.
 
 @menu
 * Noise basics::                Noise concepts and definitions.
@@ -22524,17 +16432,11 @@ discussed.  MakeNoise options and argument are then 
discussed in
 
 @cindex Noise
 @cindex Image noise
-Deep astronomical images, like those used in extragalactic studies,
-seriously suffer from noise in the data. Generally speaking, the sources of
-noise in an astronomical image are photon counting noise and Instrumental
-noise which are discussed in @ref{Photon counting noise} and
-@ref{Instrumental noise}. This review finishes with @ref{Generating random
-numbers} which is a short introduction on how random numbers are generated.
-We will see that while software random number generators are not perfect,
-they allow us to obtain a reproducible series of random numbers through
-setting the random number generator function and seed value. Therefore in
-this section, we'll also discuss how you can set these two parameters in
-Gnuastro's programs (including MakeNoise).
+Deep astronomical images, like those used in extragalactic studies, seriously 
suffer from noise in the data.
+Generally speaking, the sources of noise in an astronomical image are photon 
counting noise and instrumental noise, which are discussed in @ref{Photon 
counting noise} and @ref{Instrumental noise}.
+This review finishes with @ref{Generating random numbers}, which is a short 
introduction on how random numbers are generated.
+We will see that while software random number generators are not perfect, they 
allow us to obtain a reproducible series of random numbers through setting the 
random number generator function and seed value.
+Therefore in this section, we'll also discuss how you can set these two 
parameters in Gnuastro's programs (including MakeNoise).
 
 @menu
 * Photon counting noise::       Poisson noise
@@ -22551,98 +16453,60 @@ Gnuastro's programs (including MakeNoise).
 @cindex Poisson distribution
 @cindex Photon counting noise
 @cindex Poisson, Sim@'eon Denis
-With the very accurate electronics used in today's detectors, photon
-counting noise@footnote{In practice, we are actually counting the electrons
-that are produced by each photon, not the actual photons.} is the most
-significant source of uncertainty in most datasets. To understand this
-noise (error in counting), we need to take a closer look at how a
-distribution produced by counting can be modeled as a parametric function.
-
-Counting is an inherently discrete operation, which can only produce
-positive (including zero) integer outputs. For example we can't count
-@mymath{3.2} or @mymath{-2} of anything. We only count @mymath{0},
-@mymath{1}, @mymath{2}, @mymath{3} and so on. The distribution of values,
-as a result of counting efforts is formally known as the
-@url{https://en.wikipedia.org/wiki/Poisson_distribution, Poisson
-distribution}. It is associated to Sim@'eon Denis Poisson, because he
-discussed it while working on the number of wrongful convictions in court
-cases in his 1837 book@footnote{[From Wikipedia] Poisson's result was also
-derived in a previous study by Abraham de Moivre in 1711. Therefore some
-people suggest it should rightly be called the de Moivre distribution.}.
+With the very accurate electronics used in today's detectors, photon counting 
noise@footnote{In practice, we are actually counting the electrons that are 
produced by each photon, not the actual photons.} is the most significant 
source of uncertainty in most datasets.
+To understand this noise (error in counting), we need to take a closer look at 
how a distribution produced by counting can be modeled as a parametric function.
+
+Counting is an inherently discrete operation, which can only produce positive 
(including zero) integer outputs.
+For example we can't count @mymath{3.2} or @mymath{-2} of anything.
+We only count @mymath{0}, @mymath{1}, @mymath{2}, @mymath{3} and so on.
+The distribution of values as a result of counting efforts is formally known 
as the @url{https://en.wikipedia.org/wiki/Poisson_distribution, Poisson 
distribution}.
+It is associated with Sim@'eon Denis Poisson, because he discussed it while 
working on the number of wrongful convictions in court cases in his 1837 
book@footnote{[From Wikipedia] Poisson's result was also derived in a previous 
study by Abraham de Moivre in 1711.
+Therefore some people suggest it should rightly be called the de Moivre 
distribution.}.
 
 @cindex Probability density function
-Let's take @mymath{\lambda} to represent the expected mean count of
-something. Furthermore, let's take @mymath{k} to represent the result of
-one particular counting attempt. The probability density function of
-getting @mymath{k} counts (in each attempt, given the expected/mean count
-of @mymath{\lambda}) can be written as:
+Let's take @mymath{\lambda} to represent the expected mean count of something.
+Furthermore, let's take @mymath{k} to represent the result of one particular 
counting attempt.
+The probability density function of getting @mymath{k} counts (in each 
attempt, given the expected/mean count of @mymath{\lambda}) can be written as:
 
 @cindex Poisson distribution
-@dispmath{f(k)={\lambda^k \over k!} e^{-\lambda},\quad k\in @{0, 1, 2,
-3, \dots @}}
+@dispmath{f(k)={\lambda^k \over k!} e^{-\lambda},\quad k\in @{0, 1, 2, 3, 
\dots @}}
 
 @cindex Skewed Poisson distribution
-Because the Poisson distribution is only applicable to positive values
-(note the factorial operator, which only applies to non-negative integers),
-naturally it is very skewed when @mymath{\lambda} is near zero. One
-qualitative way to understand this behavior is that there simply aren't
-enough integers smaller than @mymath{\lambda}, than integers that are
-larger than it. Therefore to accommodate all possibilities/counts, it has
-to be strongly skewed when @mymath{\lambda} is small.
+Because the Poisson distribution is only applicable to positive values (note 
the factorial operator, which only applies to non-negative integers), naturally 
it is very skewed when @mymath{\lambda} is near zero.
+One qualitative way to understand this behavior is that there simply aren't 
as many integers smaller than @mymath{\lambda} as there are integers larger 
than it.
+Therefore to accommodate all possibilities/counts, it has to be strongly 
skewed when @mymath{\lambda} is small.
 
 @cindex Compare Poisson and Gaussian
-As @mymath{\lambda} becomes larger, the distribution becomes more and more
-symmetric. A very useful property of the Poisson distribution is that the
-mean value is also its variance.  When @mymath{\lambda} is very large, say
-@mymath{\lambda>1000}, then the
-@url{https://en.wikipedia.org/wiki/Normal_distribution, Normal (Gaussian)
-distribution}, is an excellent approximation of the Poisson distribution
-with mean @mymath{\mu=\lambda} and standard deviation
-@mymath{\sigma=\sqrt{\lambda}}. In other words, a Poisson distribution
-(with a sufficiently large @mymath{\lambda}) is simply a Gaussian that only
-has one free parameter (@mymath{\mu=\lambda} and
-@mymath{\sigma=\sqrt{\lambda}}), instead of the two parameters (independent
-@mymath{\mu} and @mymath{\sigma}) that it originally has.
+As @mymath{\lambda} becomes larger, the distribution becomes more and more 
symmetric.
+A very useful property of the Poisson distribution is that the mean value is 
also its variance.
+When @mymath{\lambda} is very large, say @mymath{\lambda>1000}, then the 
@url{https://en.wikipedia.org/wiki/Normal_distribution, Normal (Gaussian) 
distribution}, is an excellent approximation of the Poisson distribution with 
mean @mymath{\mu=\lambda} and standard deviation @mymath{\sigma=\sqrt{\lambda}}.
+In other words, a Poisson distribution (with a sufficiently large 
@mymath{\lambda}) is simply a Gaussian that only has one free parameter 
(@mymath{\mu=\lambda} and @mymath{\sigma=\sqrt{\lambda}}), instead of the two 
parameters (independent @mymath{\mu} and @mymath{\sigma}) that it originally 
has.
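+
+For example, taking an illustrative expected count of @mymath{\lambda=10000} (a value chosen here purely for demonstration), the approximating Gaussian would have
+
+@dispmath{\mu=10000, \quad \sigma=\sqrt{10000}=100}
+
+In other words, such a pixel typically fluctuates by only one percent of its expected value, while for @mymath{\lambda=100} the same relative fluctuation would be ten percent.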
 
 @cindex Sky value
 @cindex Background flux
 @cindex Undetected objects
-In real situations, the photons/flux from our targets are added to a
-certain background flux (observationally, the @emph{Sky} value). The Sky
-value is defined to be the average flux of a region in the dataset with no
-targets. Its physical origin can be the brightness of the atmosphere (for
-ground-based instruments), possible stray light within the imaging
-instrument, the average flux of undetected targets, or etc. The Sky value
-is thus an ideal definition, because in real datasets, what lies deep in
-the noise (far lower than the detection limit) is never known@footnote{In a
-real image, a relatively large number of very faint objects can been fully
-buried in the noise and never detected. These undetected objects will bias
-the background measurement to slightly larger values. Our best
-approximation is thus to simply assume they are uniform, and consider their
-average effect. See Figure 1 (a.1 and a.2) and Section 2.2 in
-@url{https://arxiv.org/abs/1505.01664, Akhlaghi and Ichikawa [2015]}.}. To
-account for all of these, the sky value is defined to be the average
-count/value of the undetected regions in the image. In a mock
-image/dataset, we have the luxury of setting the background (Sky) value.
+In real situations, the photons/flux from our targets are added to a certain 
background flux (observationally, the @emph{Sky} value).
+The Sky value is defined to be the average flux of a region in the dataset 
with no targets.
+Its physical origin can be the brightness of the atmosphere (for ground-based 
instruments), possible stray light within the imaging instrument, the average 
flux of undetected targets, etc.
+The Sky value is thus an ideal definition, because in real datasets, what lies 
deep in the noise (far lower than the detection limit) is never 
known@footnote{In a real image, a relatively large number of very faint objects 
can be fully buried in the noise and never detected.
+These undetected objects will bias the background measurement to slightly 
larger values.
+Our best approximation is thus to simply assume they are uniform, and consider 
their average effect.
+See Figure 1 (a.1 and a.2) and Section 2.2 in 
@url{https://arxiv.org/abs/1505.01664, Akhlaghi and Ichikawa [2015]}.}.
+To account for all of these, the sky value is defined to be the average 
count/value of the undetected regions in the image.
+In a mock image/dataset, we have the luxury of setting the background (Sky) 
value.
 
 @cindex Simulating noise
 @cindex Noise simulation
-In each element of the dataset (pixel in an image), the flux is the sum of
-contributions from various sources (after convolution by the PSF, see
-@ref{PSF}). Let's name the convolved sum of possibly overlapping objects,
-@mymath{I_{nn}}.  @mymath{nn} representing `no noise'. For now, let's
-assume the background (@mymath{B}) is constant and sufficiently high for
-the Poisson distribution to be approximated by a Gaussian. Then the flux
-after adding noise is a random value taken from a Gaussian distribution
-with the following mean (@mymath{\mu}) and standard deviation
-(@mymath{\sigma}):
+In each element of the dataset (pixel in an image), the flux is the sum of 
contributions from various sources (after convolution by the PSF, see 
@ref{PSF}).
+Let's name the convolved sum of possibly overlapping objects 
@mymath{I_{nn}}, with @mymath{nn} representing `no noise'.
+For now, let's assume the background (@mymath{B}) is constant and sufficiently 
high for the Poisson distribution to be approximated by a Gaussian.
+Then the flux after adding noise is a random value taken from a Gaussian 
distribution with the following mean (@mymath{\mu}) and standard deviation 
(@mymath{\sigma}):
 
 @dispmath{\mu=B+I_{nn}, \quad \sigma=\sqrt{B+I_{nn}}}
 
-Since this type of noise is inherent in the objects we study, it is
-usually measured on the same scale as the astronomical objects, namely
-the magnitude system, see @ref{Flux Brightness and magnitude}. It is
-then internally converted to the flux scale for further processing.
+Since this type of noise is inherent in the objects we study, it is usually 
measured on the same scale as the astronomical objects, namely the magnitude 
system, see @ref{Flux Brightness and magnitude}.
+It is then internally converted to the flux scale for further processing.
 
 @node Instrumental noise, Final noised pixel value, Photon counting noise, 
Noise basics
 @subsubsection Instrumental noise
@@ -22650,38 +16514,27 @@ then internally converted to the flux scale for 
further processing.
 @cindex Readout noise
 @cindex Instrumental noise
 @cindex Noise, instrumental
-While taking images with a camera, a dark current is fed to the pixels, the
-variation of the value of this dark current over the pixels, also adds to
-the final image noise. Another source of noise is the readout noise that is
-produced by the electronics in the detector. Specifically, the parts that
-attempt to digitize the voltage produced by the photo-electrons in the
-analog to digital converter. With the current generation of instruments,
-this source of noise is not as significant as the noise due to the
-background Sky discussed in @ref{Photon counting noise}.
-
-Let @mymath{C} represent the combined standard deviation of all these
-instrumental sources of noise. When only this source of noise is present,
-the noised pixel value would be a random value chosen from a Gaussian
-distribution with
+While taking images with a camera, a dark current is fed to the pixels; the 
variation of this dark current over the pixels also adds to the final image 
noise.
+Another source of noise is the readout noise that is produced by the 
electronics in the detector, specifically the parts that attempt to digitize 
the voltage produced by the photo-electrons in the analog-to-digital converter.
+With the current generation of instruments, this source of noise is not as 
significant as the noise due to the background Sky discussed in @ref{Photon 
counting noise}.
+
+Let @mymath{C} represent the combined standard deviation of all these 
instrumental sources of noise.
+When only this source of noise is present, the noised pixel value would be a 
random value chosen from a Gaussian distribution with
 
 @dispmath{\mu=I_{nn}, \quad \sigma=\sqrt{C^2+I_{nn}}}
 
 @cindex ADU
 @cindex Gain
 @cindex Counts
-This type of noise is independent of the signal in the dataset, it is only
-determined by the instrument. So the flux scale (and not magnitude scale)
-is most commonly used for this type of noise. In practice, this value is
-usually reported in analog-to-digital units or ADUs, not flux or electron
-counts. The gain value of the device can be used to convert between these
-two, see @ref{Flux Brightness and magnitude}.
+This type of noise is independent of the signal in the dataset; it is only 
determined by the instrument.
+So the flux scale (and not magnitude scale) is most commonly used for this 
type of noise.
+In practice, this value is usually reported in analog-to-digital units or 
ADUs, not flux or electron counts.
+The gain value of the device can be used to convert between these two, see 
@ref{Flux Brightness and magnitude}.
 
 @node Final noised pixel value, Generating random numbers, Instrumental noise, 
Noise basics
 @subsubsection Final noised pixel value
-Based on the discussions in @ref{Photon counting noise} and
-@ref{Instrumental noise}, depending on the values you specify for
-@mymath{B} and @mymath{C} from the above, the final noised value for each
-pixel is a random value chosen from a Gaussian distribution with
+Based on the discussions in @ref{Photon counting noise} and @ref{Instrumental 
noise}, depending on the values you specify for @mymath{B} and @mymath{C} from 
the above, the final noised value for each pixel is a random value chosen from 
a Gaussian distribution with
 
 @dispmath{\mu=B+I_{nn}, \quad \sigma=\sqrt{C^2+B+I_{nn}}}
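+
+As a quick illustration with arbitrary values (chosen only to give round numbers), a background of @mymath{B=100}, an instrumental noise of @mymath{C=10} and a noise-less pixel value of @mymath{I_{nn}=25} would give
+
+@dispmath{\mu=100+25=125, \quad \sigma=\sqrt{10^2+100+25}=\sqrt{225}=15}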
 
@@ -22692,72 +16545,45 @@ pixel is a random value chosen from a Gaussian 
distribution with
 
 @cindex Random numbers
 @cindex Numbers, random
-As discussed above, to generate noise we need to make random samples
-of a particular distribution. So it is important to understand some
-general concepts regarding the generation of random numbers. For a
-very complete and nice introduction we strongly advise reading Donald
-Knuth's ``The art of computer programming'', volume 2, chapter
-3@footnote{Knuth, Donald. 1998. The art of computer
-programming. Addison--Wesley. ISBN 0-201-89684-2 }. Quoting from the
-GNU Scientific Library manual, ``If you don't own it, you should stop
-reading right now, run to the nearest bookstore, and buy
-it''@footnote{For students, running to the library might be more
-affordable!}!
+As discussed above, to generate noise we need to make random samples of a 
particular distribution.
+So it is important to understand some general concepts regarding the 
generation of random numbers.
+For a very complete and nice introduction we strongly advise reading Donald 
Knuth's ``The art of computer programming'', volume 2, chapter 
3@footnote{Knuth, Donald. 1998.
+The art of computer programming. Addison--Wesley. ISBN 0-201-89684-2 }.
+Quoting from the GNU Scientific Library manual, ``If you don't own it, you 
should stop reading right now, run to the nearest bookstore, and buy 
it''@footnote{For students, running to the library might be more affordable!}!
 
 @cindex Psuedo-random numbers
 @cindex Numbers, psuedo-random
-Using only software, we can only produce what is called a psuedo-random
-sequence of numbers. A true random number generator is a hardware (let's
-assume we have made sure it has no systematic biases), for example
-throwing dice or flipping coins (which have remained from the ancient
-times). More modern hardware methods use atmospheric noise, thermal
-noise or other types of external electromagnetic or quantum
-phenomena. All pseudo-random number generators (software) require a
-seed to be the basis of the generation. The advantage of having a seed
-is that if you specify the same seed for multiple runs, you will get
-an identical sequence of random numbers which allows you to reproduce
-the same final noised image.
+Using only software, we can only produce what is called a pseudo-random 
sequence of numbers.
+A true random number generator is a hardware device (let's assume we have made 
sure it has no systematic biases), for example throwing dice or flipping coins 
(methods that have remained from ancient times).
+More modern hardware methods use atmospheric noise, thermal noise or other 
types of external electromagnetic or quantum phenomena.
+All pseudo-random number generators (software) require a seed to be the basis 
of the generation.
+The advantage of having a seed is that if you specify the same seed for 
multiple runs, you will get an identical sequence of random numbers which 
allows you to reproduce the same final noised image.
 
 @cindex Environment variables
 @cindex GNU Scientific Library
-The programs in GNU Astronomy Utilities (for example MakeNoise or
-MakeProfiles) use the GNU Scientific Library (GSL) to generate random
-numbers. GSL allows the user to set the random number generator
-through environment variables, see @ref{Installation directory} for an
-introduction to environment variables. In the chapter titled ``Random
-Number Generation'' they have fully explained the various random
-number generators that are available (there are a lot of
-them!). Through the two environment variables @code{GSL_RNG_TYPE} and
-@code{GSL_RNG_SEED} you can specify the generator and its seed
-respectively.
+The programs in GNU Astronomy Utilities (for example MakeNoise or 
MakeProfiles) use the GNU Scientific Library (GSL) to generate random numbers.
+GSL allows the user to set the random number generator through environment 
variables, see @ref{Installation directory} for an introduction to environment 
variables.
+In the chapter titled ``Random Number Generation'', the GSL manual fully 
explains the various random number generators that are available (there are a 
lot of them!).
+Through the two environment variables @code{GSL_RNG_TYPE} and 
@code{GSL_RNG_SEED} you can specify the generator and its seed respectively.
 
 @cindex Seed, Random number generator
 @cindex Random number generator, Seed
-If you don't specify a value for @code{GSL_RNG_TYPE}, GSL will use its
-default random number generator type. The default type is sufficient for
-most general applications. If no value is given for the @code{GSL_RNG_SEED}
-environment variable and you have asked Gnuastro to read the seed from the
-environment (through the @option{--envseed} option), then GSL will use the
-default value of each generator to give identical outputs. If you don't
-explicitly tell Gnuastro programs to read the seed value from the
-environment variable, then they will use the system time (accurate to
-within a microsecond) to generate (apparently random) seeds. In this
-manner, every time you run the program, you will get a different random
-number distribution.
-
-There are two ways you can specify values for these environment
-variables. You can call them on the same command-line for example:
+If you don't specify a value for @code{GSL_RNG_TYPE}, GSL will use its default 
random number generator type.
+The default type is sufficient for most general applications.
+If no value is given for the @code{GSL_RNG_SEED} environment variable and you 
have asked Gnuastro to read the seed from the environment (through the 
@option{--envseed} option), then GSL will use the default value of each 
generator to give identical outputs.
+If you don't explicitly tell Gnuastro programs to read the seed value from the 
environment variable, then they will use the system time (accurate to within a 
microsecond) to generate (apparently random) seeds.
+In this manner, every time you run the program, you will get a different 
random number distribution.
+
+There are two ways you can specify values for these environment variables.
+You can call them on the same command-line for example:
 
 @example
 $ GSL_RNG_TYPE="taus" GSL_RNG_SEED=345 astmknoise input.fits
 @end example
 
 @noindent
-In this manner the values will only be used for this particular execution
-of MakeNoise. Alternatively, you can define them for the full period of
-your terminal session or script length, using the shell's @command{export}
-command with the two separate commands below (for a script remove the
-@code{$} signs):
+In this manner the values will only be used for this particular execution of 
MakeNoise.
+Alternatively, you can define them for the full period of your terminal 
session or script length, using the shell's @command{export} command with the 
two separate commands below (for a script remove the @code{$} signs):
 
 @example
 $ export GSL_RNG_TYPE="taus"
@@ -22767,23 +16593,14 @@ $ export GSL_RNG_SEED=345
 @cindex Startup scripts
 @cindex @file{.bashrc}
 @noindent
-The subsequent programs which use GSL's random number generators will hence
-forth use these values in this session of the terminal you are running or
-while executing this script. In case you want to set fixed values for these
-parameters every time you use the GSL random number generator, you can add
-these two lines to your @file{.bashrc} startup script@footnote{Don't forget
-that if you are going to give your scripts (that use the GSL random number
-generator) to others you have to make sure you also tell them to set these
-environment variable separately. So for scripts, it is best to keep all
-such variable definitions within the script, even if they are within your
-@file{.bashrc}.}, see @ref{Installation directory}.
+The subsequent programs which use GSL's random number generators will 
henceforth use these values in this session of the terminal you are running or 
while executing this script.
+In case you want to set fixed values for these parameters every time you use 
the GSL random number generator, you can add these two lines to your 
@file{.bashrc} startup script@footnote{Don't forget that if you are going to 
give your scripts (that use the GSL random number generator) to others you have 
to make sure you also tell them to set these environment variables separately.
+So for scripts, it is best to keep all such variable definitions within the 
script, even if they are within your @file{.bashrc}.}, see @ref{Installation 
directory}.
 
 @cartouche
 @noindent
-@strong{NOTE:} If the two environment variables @code{GSL_RNG_TYPE}
-and @code{GSL_RNG_SEED} are defined, GSL will report them by default,
-even if you don't use the @option{--envseed} option. For example you
-can see the top few lines of the output of MakeProfiles:
+@strong{NOTE:} If the two environment variables @code{GSL_RNG_TYPE} and 
@code{GSL_RNG_SEED} are defined, GSL will report them by default, even if you 
don't use the @option{--envseed} option.
+For example you can see the top few lines of the output of MakeProfiles:
 
 @example
 $ export GSL_RNG_TYPE="taus"
@@ -22802,22 +16619,18 @@ MakeProfiles finished in 0.111271 seconds
 @noindent
 @cindex Seed, Random number generator
 @cindex Random number generator, Seed
-The first two output lines (showing the names of the environment
-variables) are printed by GSL before MakeProfiles actually starts
-generating random numbers. The Gnuastro programs will report the
-values they use independently, you should check them for the final
-values used. For example if @option{--envseed} is not given,
-@code{GSL_RNG_SEED} will not be used and the last line shown above
-will not be printed. In the case of MakeProfiles, each profile will
-get its own seed value.
+The first two output lines (showing the names of the environment variables) 
are printed by GSL before MakeProfiles actually starts generating random 
numbers.
+The Gnuastro programs will report the values they use independently; you 
should check them for the final values used.
+For example if @option{--envseed} is not given, @code{GSL_RNG_SEED} will not 
be used and the last line shown above will not be printed.
+In the case of MakeProfiles, each profile will get its own seed value.
 @end cartouche
 
 
 @node Invoking astmknoise,  , Noise basics, MakeNoise
 @subsection Invoking MakeNoise
 
-MakeNoise will add noise to an existing image. The executable name is
-@file{astmknoise} with the following general template
+MakeNoise will add noise to an existing image.
+The executable name is @file{astmknoise} with the following general template
 
 @example
 $ astmknoise [OPTION ...] InputImage.fits
@@ -22836,55 +16649,44 @@ $ astmknoise --background=-10 -z0 --instrumental=20 
mockimage.fits
 @end example
 
 @noindent
-If actual processing is to be done, the input image is a mandatory
-argument. The full list of options common to all the programs in Gnuastro
-can be seen in @ref{Common options}. The type (see @ref{Numeric data
-types}) of the output can be specified with the @option{--type} option, see
-@ref{Input output options}. The header of the output FITS file keeps all
-the parameters that were influential in making it. This is done for future
-reproducibility.
+If actual processing is to be done, the input image is a mandatory argument.
+The full list of options common to all the programs in Gnuastro can be seen in 
@ref{Common options}.
+The type (see @ref{Numeric data types}) of the output can be specified with 
the @option{--type} option, see @ref{Input output options}.
+The header of the output FITS file keeps all the parameters that were 
influential in making it.
+This is done for future reproducibility.
 
 @table @option
 
 @item -s FLT
 @itemx --sigma=FLT
-The total noise sigma in the same units as the pixel values. With this
-option, the @option{--background}, @option{--zeropoint} and
-@option{--instrumental} will be ignored. With this option, the noise will
-be independent of the pixel values (which is not realistic, see @ref{Photon
-counting noise}). Hence it is only useful if you are working on low surface
-brightness regions where the change in pixel value (and thus real noise) is
-insignificant.
+The total noise sigma in the same units as the pixel values.
+With this option, the @option{--background}, @option{--zeropoint} and 
@option{--instrumental} options will be ignored.
+The noise will then be independent of the pixel values (which is not 
realistic, see @ref{Photon counting noise}).
+Hence it is only useful if you are working on low surface brightness regions 
where the change in pixel value (and thus real noise) is insignificant.
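+
+For example (a minimal sketch; the sigma value and file name are arbitrary):
+
+@example
+$ astmknoise --sigma=0.1 input.fits
+@end example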
 
 @item -b FLT
 @itemx --background=FLT
-The background pixel value for the image in units of magnitudes, see
-@ref{Photon counting noise} and @ref{Flux Brightness and magnitude}.
+The background pixel value for the image in units of magnitudes, see 
@ref{Photon counting noise} and @ref{Flux Brightness and magnitude}.
 
 @item -z FLT
 @itemx --zeropoint=FLT
-The zeropoint magnitude used to convert the value of @option{--background}
-(in units of magnitude) to flux, see @ref{Flux Brightness and magnitude}.
+The zeropoint magnitude used to convert the value of @option{--background} (in 
units of magnitude) to flux, see @ref{Flux Brightness and magnitude}.
 
 @item -i FLT
 @itemx --instrumental=FLT
-The instrumental noise which is in units of flux, see @ref{Instrumental
-noise}.
+The instrumental noise which is in units of flux, see @ref{Instrumental noise}.
 
 @item -e
 @itemx --envseed
 @cindex Seed, Random number generator
 @cindex Random number generator, Seed
-Use the @code{GSL_RNG_SEED} environment variable for the seed used in
-the random number generator, see @ref{Generating random numbers}. With
-this option, the output image noise is always going to be identical
-(or reproducible).
+Use the @code{GSL_RNG_SEED} environment variable for the seed used in the 
random number generator, see @ref{Generating random numbers}.
+With this option, the output image noise is always going to be identical (or 
reproducible).
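+
+For example, combined with the environment variable discussed in @ref{Generating random numbers}, a reproducible run might look like this sketch (the seed value is arbitrary):
+
+@example
+$ export GSL_RNG_SEED=345
+$ astmknoise --envseed input.fits
+@end example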
 
 @item -d
 @itemx --doubletype
-Save the output in the double precision floating point format that was
-used internally. This option will be most useful if the input images
-were of integer types.
+Save the output in the double precision floating point format that was used 
internally.
+This option will be most useful if the input images were of integer types.
 
 @end table
 
@@ -22908,14 +16710,9 @@ were of integer types.
 @node High-level calculations, Library, Modeling and fittings, Top
 @chapter High-level calculations
 
-After the reduction of raw data (for example with the programs in @ref{Data
-manipulation}) you will have reduced images/data ready for
-processing/analyzing (for example with the programs in @ref{Data
-analysis}). But the processed/analyzed data (or catalogs) are still not
-enough to derive any scientific result. Even higher-level analysis is still
-needed to convert the observed magnitudes, sizes or volumes into physical
-quantities that we associate with each catalog entry or detected object
-which is the purpose of the tools in this section.
+After the reduction of raw data (for example with the programs in @ref{Data 
manipulation}) you will have reduced images/data ready for processing/analyzing 
(for example with the programs in @ref{Data analysis}).
+But the processed/analyzed data (or catalogs) are still not enough to derive 
any scientific result.
+Even higher-level analysis is still needed to convert the observed magnitudes, 
sizes or volumes into physical quantities that we associate with each catalog 
entry or detected object, which is the purpose of the tools in this section.
 
 
 
@@ -22928,23 +16725,14 @@ which is the purpose of the tools in this section.
 @node CosmicCalculator,  , High-level calculations, High-level calculations
 @section CosmicCalculator
 
-To derive higher-level information regarding our sources in
-extra-galactic astronomy, cosmological calculations are necessary. In
-Gnuastro, CosmicCalculator is in charge of such calculations. Before
-discussing how CosmicCalculator is called and operates (in
-@ref{Invoking astcosmiccal}), it is important to provide a rough but
-mostly self sufficient review of the basics and the equations used in
-the analysis. In @ref{Distance on a 2D curved space} the basic idea of
-understanding distances in a curved and expanding 2D universe (which
-we can visualize) are reviewed. Having solidified the concepts there,
-in @ref{Extending distance concepts to 3D}, the formalism is extended
-to the 3D universe we are trying to study in our research.
-
-The focus here is obtaining a physical insight into these equations
-(mainly for the use in real observational studies). There are many
-books thoroughly deriving and proving all the equations with all
-possible initial conditions and assumptions for any abstract universe,
-interested readers can study those books.
+To derive higher-level information regarding our sources in extra-galactic 
astronomy, cosmological calculations are necessary.
+In Gnuastro, CosmicCalculator is in charge of such calculations.
+Before discussing how CosmicCalculator is called and operates (in 
@ref{Invoking astcosmiccal}), it is important to provide a rough but mostly 
self-sufficient review of the basics and the equations used in the analysis.
+In @ref{Distance on a 2D curved space} the basic idea of understanding 
distances in a curved and expanding 2D universe (which we can visualize) is 
reviewed.
+Having solidified the concepts there, in @ref{Extending distance concepts to 
3D}, the formalism is extended to the 3D universe we are trying to study in our 
research.
+
+The focus here is obtaining a physical insight into these equations (mainly 
for the use in real observational studies).
+There are many books thoroughly deriving and proving all the equations with 
all possible initial conditions and assumptions for any abstract universe; 
interested readers can study those books.
 
 @menu
 * Distance on a 2D curved space::  Distances in 2D for simplicity
@@ -22955,49 +16743,29 @@ interested readers can study those books.
 @node Distance on a 2D curved space, Extending distance concepts to 3D, 
CosmicCalculator, CosmicCalculator
 @subsection Distance on a 2D curved space
 
-The observations to date (for example the Planck 2015 results), have not
-measured@footnote{The observations are interpreted under the assumption of
-uniform curvature. For a relativistic alternative to dark energy (and maybe
-also some part of dark matter), non-uniform curvature may be even be more
-critical, but that is beyond the scope of this brief explanation.} the
-presence of significant curvature in the universe. However to be generic
-(and allow its measurement if it does in fact exist), it is very important
-to create a framework that allows non-zero uniform curvature. However, this
-section is not intended to be a fully thorough and mathematically complete
-derivation of these concepts. There are many references available for such
-reviews that go deep into the abstract mathematical proofs. The emphasis
-here is on visualization of the concepts for a beginner.
-
-As 3D beings, it is difficult for us to mentally create (visualize) a
-picture of the curvature of a 3D volume. Hence, here we will assume a 2D
-surface/space and discuss distances on that 2D surface when it is flat and
-when it is curved. Once the concepts have been created/visualized here, we
-will extend them, in @ref{Extending distance concepts to 3D}, to a real 3D
-spatial @emph{slice} of the Universe we live in and hope to study.
-
-To be more understandable (actively discuss from an observer's point of
-view) let's assume there's an imaginary 2D creature living on the 2D space
-(which @emph{might} be curved in 3D). Here, we will be working with this
-creature in its efforts to analyze distances in its 2D universe. The start
-of the analysis might seem too mundane, but since it is difficult to
-imagine a 3D curved space, it is important to review all the very basic
-concepts thoroughly for an easy transition to a universe that is more
-difficult to visualize (a curved 3D space embedded in 4D).
-
-To start, let's assume a static (not expanding or shrinking), flat 2D
-surface similar to @ref{flatplane} and that the 2D creature is observing
-its universe from point @mymath{A}. One of the most basic ways to
-parameterize this space is through the Cartesian coordinates (@mymath{x},
-@mymath{y}). In @ref{flatplane}, the basic axes of these two coordinates
-are plotted. An infinitesimal change in the direction of each axis is
-written as @mymath{dx} and @mymath{dy}. For each point, the infinitesimal
-changes are parallel with the respective axes and are not shown for
-clarity. Another very useful way of parameterizing this space is through
-polar coordinates. For each point, we define a radius (@mymath{r}) and
-angle (@mymath{\phi}) from a fixed (but arbitrary) reference axis. In
-@ref{flatplane} the infinitesimal changes for each polar coordinate are
-plotted for a random point and a dashed circle is shown for all points with
-the same radius.
+The observations to date (for example the Planck 2015 results) have not 
measured@footnote{The observations are interpreted under the assumption of 
uniform curvature.
+For a relativistic alternative to dark energy (and maybe also some part of 
dark matter), non-uniform curvature may even be more critical, but that is 
beyond the scope of this brief explanation.} the presence of significant 
curvature in the universe.
+However, to be generic (and allow its measurement if it does in fact exist), 
it is very important to create a framework that allows non-zero uniform 
curvature.
+That said, this section is not intended to be a fully thorough and 
mathematically complete derivation of these concepts.
+There are many references available for such reviews that go deep into the 
abstract mathematical proofs.
+The emphasis here is on visualization of the concepts for a beginner.
+
+As 3D beings, it is difficult for us to mentally create (visualize) a picture 
of the curvature of a 3D volume.
+Hence, here we will assume a 2D surface/space and discuss distances on that 2D 
surface when it is flat and when it is curved.
+Once the concepts have been created/visualized here, we will extend them, in 
@ref{Extending distance concepts to 3D}, to a real 3D spatial @emph{slice} of 
the Universe we live in and hope to study.
+
+To be more concrete (and to actively discuss from an observer's point of 
view), let's assume there's an imaginary 2D creature living on the 2D space 
(which @emph{might} be curved in 3D).
+Here, we will be working with this creature in its efforts to analyze 
distances in its 2D universe.
+The start of the analysis might seem too mundane, but since it is difficult to 
imagine a 3D curved space, it is important to review all the very basic 
concepts thoroughly for an easy transition to a universe that is more difficult 
to visualize (a curved 3D space embedded in 4D).
+
+To start, let's assume a static (not expanding or shrinking), flat 2D surface 
similar to @ref{flatplane} and that the 2D creature is observing its universe 
from point @mymath{A}.
+One of the most basic ways to parameterize this space is through the Cartesian 
coordinates (@mymath{x}, @mymath{y}).
+In @ref{flatplane}, the basic axes of these two coordinates are plotted.
+An infinitesimal change in the direction of each axis is written as 
@mymath{dx} and @mymath{dy}.
+For each point, the infinitesimal changes are parallel with the respective 
axes and are not shown for clarity.
+Another very useful way of parameterizing this space is through polar 
coordinates.
+For each point, we define a radius (@mymath{r}) and angle (@mymath{\phi}) from 
a fixed (but arbitrary) reference axis.
+In @ref{flatplane} the infinitesimal changes for each polar coordinate are 
plotted for a random point and a dashed circle is shown for all points with the 
same radius.
 
 @float Figure,flatplane
 @center@image{gnuastro-figures/flatplane, 10cm, , }
@@ -23006,122 +16774,78 @@ the same radius.
 plane.}
 @end float
 
-Assuming an object is placed at a certain position, which can be
-parameterized as @mymath{(x,y)}, or @mymath{(r,\phi)}, a general
-infinitesimal change in its position will place it in the coordinates
-@mymath{(x+dx,y+dy)} and @mymath{(r+dr,\phi+d\phi)}. The distance (on the
-flat 2D surface) that is covered by this infinitesimal change in the static
-universe (@mymath{ds_s}, the subscript signifies the static nature of this
-universe) can be written as:
+Assuming an object is placed at a certain position, which can be parameterized 
as @mymath{(x,y)}, or @mymath{(r,\phi)}, a general infinitesimal change in its 
position will place it in the coordinates @mymath{(x+dx,y+dy)} and 
@mymath{(r+dr,\phi+d\phi)}.
+The distance (on the flat 2D surface) that is covered by this infinitesimal 
change in the static universe (@mymath{ds_s}, the subscript signifies the 
static nature of this universe) can be written as:
 
 @dispmath{ds_s^2=dx^2+dy^2=dr^2+r^2d\phi^2}
 
-The main question is this: how can the 2D creature incorporate the
-(possible) curvature in its universe when it's calculating distances? The
-universe that it lives in might equally be a curved surface like
-@ref{sphereandplane}. The answer to this question but for a 3D being (us)
-is the whole purpose to this discussion. Here, we want to give the 2D
-creature (and later, ourselves) the tools to measure distances if the space
-(that hosts the objects) is curved.
-
-@ref{sphereandplane} assumes a spherical shell with radius @mymath{R} as
-the curved 2D plane for simplicity. The 2D plane is tangent to the
-spherical shell and only touches it at @mymath{A}. This idea will be
-generalized later. The first step in measuring the distance in a curved
-space is to imagine a third dimension along the @mymath{z} axis as shown in
-@ref{sphereandplane}. For simplicity, the @mymath{z} axis is assumed to
-pass through the center of the spherical shell. Our imaginary 2D creature
-cannot visualize the third dimension or a curved 2D surface within it, so
-the remainder of this discussion is purely abstract for it (similar to us
-having difficulty in visualizing a 3D curved space in 4D). But since we are
-3D creatures, we have the advantage of visualizing the following
-steps. Fortunately the 2D creature is already familiar with our
-mathematical constructs, so it can follow our reasoning.
-
-With the third axis added, a generic infinitesimal change over @emph{the
-full} 3D space corresponds to the distance:
+The main question is this: how can the 2D creature incorporate the (possible) 
curvature in its universe when it's calculating distances?
+The universe that it lives in might equally be a curved surface like 
@ref{sphereandplane}.
+The answer to this question, but for a 3D being (us), is the whole purpose of 
this discussion.
+Here, we want to give the 2D creature (and later, ourselves) the tools to 
measure distances if the space (that hosts the objects) is curved.
+
+@ref{sphereandplane} assumes a spherical shell with radius @mymath{R} as the 
curved 2D plane for simplicity.
+The 2D plane is tangent to the spherical shell and only touches it at 
@mymath{A}.
+This idea will be generalized later.
+The first step in measuring the distance in a curved space is to imagine a 
third dimension along the @mymath{z} axis as shown in @ref{sphereandplane}.
+For simplicity, the @mymath{z} axis is assumed to pass through the center of 
the spherical shell.
+Our imaginary 2D creature cannot visualize the third dimension or a curved 2D 
surface within it, so the remainder of this discussion is purely abstract for 
it (similar to us having difficulty in visualizing a 3D curved space in 4D).
+But since we are 3D creatures, we have the advantage of visualizing the 
following steps.
+Fortunately the 2D creature is already familiar with our mathematical 
constructs, so it can follow our reasoning.
+
+With the third axis added, a generic infinitesimal change over @emph{the full} 
3D space corresponds to the distance:
 
 @dispmath{ds_s^2=dx^2+dy^2+dz^2=dr^2+r^2d\phi^2+dz^2.}
 
 @float Figure,sphereandplane
 @center@image{gnuastro-figures/sphereandplane, 10cm, , }
 
-@caption{2D spherical shell (centered on @mymath{O}) and flat plane (light
-gray) tangent to it at point @mymath{A}.}
+@caption{2D spherical shell (centered on @mymath{O}) and flat plane (light 
gray) tangent to it at point @mymath{A}.}
 @end float
 
-It is very important to recognize that this change of distance is for
-@emph{any} point in the 3D space, not just those changes that occur on the
-2D spherical shell of @ref{sphereandplane}. Recall that our 2D friend can
-only do measurements on the 2D surfaces, not the full 3D space. So we have
-to constrain this general change to any change on the 2D spherical
-shell. To do that, let's look at the arbitrary point @mymath{P} on the 2D
-spherical shell. Its image (@mymath{P'}) on the flat plain is also
-displayed. From the dark gray triangle, we see that
-
-@dispmath{\sin\theta={r\over R},\quad\cos\theta={R-z\over R}.}These
-relations allow the 2D creature to find the value of @mymath{z} (an
-abstract dimension for it) as a function of r (distance on a flat 2D plane,
-which it can visualize) and thus eliminate @mymath{z}. From
-@mymath{\sin^2\theta+\cos^2\theta=1}, we get @mymath{z^2-2Rz+r^2=0} and
-solving for @mymath{z}, we find:
+It is very important to recognize that this change of distance is for 
@emph{any} point in the 3D space, not just those changes that occur on the 2D 
spherical shell of @ref{sphereandplane}.
+Recall that our 2D friend can only do measurements on the 2D surfaces, not the 
full 3D space.
+So we have to constrain this general change to any change on the 2D spherical 
shell.
+To do that, let's look at the arbitrary point @mymath{P} on the 2D spherical 
shell.
+Its image (@mymath{P'}) on the flat plane is also displayed.
+From the dark gray triangle, we see that
+
+@dispmath{\sin\theta={r\over R},\quad\cos\theta={R-z\over R}.}
+
+@noindent
+These relations allow the 2D creature to find the value of @mymath{z} (an 
abstract dimension for it) as a function of @mymath{r} (distance on a flat 2D 
plane, which it can visualize) and thus eliminate @mymath{z}.
+From @mymath{\sin^2\theta+\cos^2\theta=1}, we get @mymath{z^2-2Rz+r^2=0} and 
solving for @mymath{z}, we find:
 
 @dispmath{z=R\left(1\pm\sqrt{1-{r^2\over R^2}}\right).}
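
+@noindent
+(This is just the standard quadratic formula applied to 
@mymath{z^2-2Rz+r^2=0}: the two roots are @mymath{R\pm\sqrt{R^2-r^2}}, and 
factoring @mymath{R} out of the square root gives the form above.)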
 
-The @mymath{\pm} can be understood from @ref{sphereandplane}: For each
-@mymath{r}, there are two points on the sphere, one in the upper hemisphere
-and one in the lower hemisphere. An infinitesimal change in @mymath{r},
-will create the following infinitesimal change in @mymath{z}:
+The @mymath{\pm} can be understood from @ref{sphereandplane}: For each 
@mymath{r}, there are two points on the sphere, one in the upper hemisphere and 
one in the lower hemisphere.
+An infinitesimal change in @mymath{r} will create the following infinitesimal 
change in @mymath{z}:
 
 @dispmath{dz={\mp r\over R}\left(1\over
-\sqrt{1-{r^2/R^2}}\right)dr.}Using the positive signed equation
-instead of @mymath{dz} in the @mymath{ds_s^2} equation above, we get:
+\sqrt{1-{r^2/R^2}}\right)dr.}
+
+@noindent
+Using the positive-signed equation instead of @mymath{dz} in the 
@mymath{ds_s^2} equation above, we get:
 
 @dispmath{ds_s^2={dr^2\over 1-r^2/R^2}+r^2d\phi^2.}
 
-The derivation above was done for a spherical shell of radius @mymath{R} as
-a curved 2D surface. To generalize it to any surface, we can define
-@mymath{K=1/R^2} as the curvature parameter. Then the general infinitesimal
-change in a static universe can be written as:
+The derivation above was done for a spherical shell of radius @mymath{R} as a 
curved 2D surface.
+To generalize it to any surface, we can define @mymath{K=1/R^2} as the 
curvature parameter.
+Then the general infinitesimal change in a static universe can be written as:
 
 @dispmath{ds_s^2={dr^2\over 1-Kr^2}+r^2d\phi^2.}
 
-Therefore, when @mymath{K>0} (and curvature is the same everywhere), we
-have a finite universe, where @mymath{r} cannot become larger than
-@mymath{R} as in @ref{sphereandplane}. When @mymath{K=0}, we have a flat
-plane (@ref{flatplane}) and a negative @mymath{K} will correspond to an
-imaginary @mymath{R}. The latter two cases may be infinite in area (which
-is not a simple concept, but mathematically can be modeled with @mymath{r}
-extending infinitely), or finite-area (like a cylinder is flat everywhere
-with @mymath{ds_s^2={dx^2 + dy^2}}, but finite in one direction in size).
+Therefore, when @mymath{K>0} (and curvature is the same everywhere), we have a 
finite universe, where @mymath{r} cannot become larger than @mymath{R} as in 
@ref{sphereandplane}.
+When @mymath{K=0}, we have a flat plane (@ref{flatplane}) and a negative 
@mymath{K} will correspond to an imaginary @mymath{R}.
+The latter two cases may be infinite in area (which is not a simple concept, 
but mathematically can be modeled with @mymath{r} extending infinitely), or 
finite in area (for example a cylinder, which is flat everywhere with 
@mymath{ds_s^2={dx^2 + dy^2}}, but finite in size along one direction).
 
 @cindex Proper distance
-A very important issue that can be discussed now (while we are still in 2D
-and can actually visualize things) is that @mymath{\overrightarrow{r}} is
-tangent to the curved space at the observer's position. In other words, it
-is on the gray flat surface of @ref{sphereandplane}, even when the universe
-if curved: @mymath{\overrightarrow{r}=P'-A}. Therefore for the point
-@mymath{P} on a curved space, the raw coordinate @mymath{r} is the distance
-to @mymath{P'}, not @mymath{P}. The distance to the point @mymath{P} (at a
-specific coordinate @mymath{r} on the flat plane) over the curved surface
-(thick line in @ref{sphereandplane}) is called the @emph{proper distance}
-and is displayed with @mymath{l}. For the specific example of
-@ref{sphereandplane}, the proper distance can be calculated with:
-@mymath{l=R\theta} (@mymath{\theta} is in radians). using the
-@mymath{\sin\theta} relation found above, we can find @mymath{l} as a
-function of @mymath{r}:
+A very important issue that can be discussed now (while we are still in 2D and 
can actually visualize things) is that @mymath{\overrightarrow{r}} is tangent 
to the curved space at the observer's position.
+In other words, it is on the gray flat surface of @ref{sphereandplane}, even 
when the universe is curved: @mymath{\overrightarrow{r}=P'-A}.
+Therefore for the point @mymath{P} on a curved space, the raw coordinate 
@mymath{r} is the distance to @mymath{P'}, not @mymath{P}.
+The distance to the point @mymath{P} (at a specific coordinate @mymath{r} on 
the flat plane) over the curved surface (thick line in @ref{sphereandplane}) is 
called the @emph{proper distance} and is displayed with @mymath{l}.
+For the specific example of @ref{sphereandplane}, the proper distance can be 
calculated with: @mymath{l=R\theta} (@mymath{\theta} is in radians).
+Using the @mymath{\sin\theta} relation found above, we can find @mymath{l} as 
a function of @mymath{r}:
 
 @dispmath{\theta=\sin^{-1}\left({r\over R}\right)\quad\rightarrow\quad
 l(r)=R\sin^{-1}\left({r\over R}\right)}
 
 
-@mymath{R} is just an arbitrary constant and can be directly found from
-@mymath{K}, so for cleaner equations, it is common practice to set
-@mymath{R=1}, which gives: @mymath{l(r)=\sin^{-1}r}. Also note that when
-@mymath{R=1}, then @mymath{l=\theta}. Generally, depending on the the
-curvature, in a @emph{static} universe the proper distance can be written
-as a function of the coordinate @mymath{r} as (from now on we are assuming
-@mymath{R=1}):
+@mymath{R} is just an arbitrary constant and can be directly found from 
@mymath{K}, so for cleaner equations, it is common practice to set 
@mymath{R=1}, which gives: @mymath{l(r)=\sin^{-1}r}.
+Also note that when @mymath{R=1}, then @mymath{l=\theta}.
+Generally, depending on the curvature, in a @emph{static} universe the proper 
distance can be written as a function of the coordinate @mymath{r} as (from now 
on we are assuming @mymath{R=1}):
 
 @dispmath{l(r)=\sin^{-1}(r)\quad(K>0),\quad\quad
 l(r)=r\quad(K=0),\quad\quad l(r)=\sinh^{-1}(r)\quad(K<0).}With
@@ -23131,56 +16855,32 @@ more simpler and abstract form of
 @dispmath{ds_s^2=dl^2+r^2d\phi^2.}
 
 @cindex Comoving distance
-Until now, we had assumed a static universe (not changing with time). But
-our observations so far appear to indicate that the universe is expanding
-(it isn't static). Since there is no reason to expect the observed
-expansion is unique to our particular position of the universe, we expect
-the universe to be expanding at all points with the same rate at the same
-time. Therefore, to add a time dependence to our distance measurements, we
-can include a multiplicative scaling factor, which is a function of time:
-@mymath{a(t)}. The functional form of @mymath{a(t)} comes from the
-cosmology, the physics we assume for it: general relativity, and the choice
-of whether the universe is uniform (`homogeneous') in density and curvature
-or inhomogeneous. In this section, the functional form of @mymath{a(t)} is
-irrelevant, so we can avoid these issues.
-
-With this scaling factor, the proper distance will also depend on time. As
-the universe expands, the distance between two given points will shift to
-larger values. We thus define a distance measure, or coordinate, that is
-independent of time and thus doesn't `move'. We call it the @emph{comoving
-distance} and display with @mymath{\chi} such that:
-@mymath{l(r,t)=\chi(r)a(t)}.  We have therefore, shifted the @mymath{r}
-dependence of the proper distance we derived above for a static universe to
-the comoving distance:
+Until now, we had assumed a static universe (not changing with time).
+But our observations so far appear to indicate that the universe is expanding 
(it isn't static).
+Since there is no reason to expect the observed expansion is unique to our 
particular position in the universe, we expect the universe to be expanding at 
all points with the same rate at the same time.
+Therefore, to add a time dependence to our distance measurements, we can 
include a multiplicative scaling factor, which is a function of time: 
@mymath{a(t)}.
+The functional form of @mymath{a(t)} comes from the cosmology, the physics we 
assume for it: general relativity, and the choice of whether the universe is 
uniform (`homogeneous') in density and curvature or inhomogeneous.
+In this section, the functional form of @mymath{a(t)} is irrelevant, so we can 
avoid these issues.
+
+With this scaling factor, the proper distance will also depend on time.
+As the universe expands, the distance between two given points will shift to 
larger values.
+We thus define a distance measure, or coordinate, that is independent of time 
and so doesn't `move'.
+We call it the @emph{comoving distance} and display it with @mymath{\chi} such 
that: @mymath{l(r,t)=\chi(r)a(t)}.
+We have therefore shifted the @mymath{r} dependence of the proper distance we 
derived above for a static universe to the comoving distance:
 
 @dispmath{\chi(r)=\sin^{-1}(r)\quad(K>0),\quad\quad
 \chi(r)=r\quad(K=0),\quad\quad \chi(r)=\sinh^{-1}(r)\quad(K<0).}
 
-Therefore, @mymath{\chi(r)} is the proper distance to an object at a
-specific reference time: @mymath{t=t_r} (the @mymath{r} subscript signifies
-``reference'') when @mymath{a(t_r)=1}. At any arbitrary moment
-(@mymath{t\neq{t_r}}) before or after @mymath{t_r}, the proper distance to
-the object can be scaled with @mymath{a(t)}.
-
-Measuring the change of distance in a time-dependent (expanding) universe
-only makes sense if we can add up space and time@footnote{In other words,
-making our space-time consistent with Minkowski space-time geometry. In this
-geometry, different observers at a given point (event) in space-time split
-up space-time into `space' and `time' in different ways, just like people at
-the same spatial position can make different choices of splitting up a map
-into `left--right' and `up--down'. This model is well supported by
-twentieth and twenty-first century observations.}. But we can only add bits
-of space and time together if we measure them in the same units: with a
-conversion constant (similar to how 1000 is used to convert a kilometer
-into meters).  Experimentally, we find strong support for the hypothesis
-that this conversion constant is the speed of light (or gravitational
-waves@footnote{The speed of gravitational waves was recently found to be
-very similar to that of light in vacuum, see
-@url{https://arxiv.org/abs/1710.05834, arXiv:1710.05834}.}) in a
-vacuum. This speed is postulated to be constant@footnote{In @emph{natural
-units}, speed is measured in units of the speed of light in vacuum.} and is
-almost always written as @mymath{c}. We can thus parameterize the change in
-distance on an expanding 2D surface as
+Therefore, @mymath{\chi(r)} is the proper distance to an object at a specific 
reference time: @mymath{t=t_r} (the @mymath{r} subscript signifies 
``reference'') when @mymath{a(t_r)=1}.
+At any arbitrary moment (@mymath{t\neq{t_r}}) before or after @mymath{t_r}, 
the proper distance to the object can be scaled with @mymath{a(t)}.
+
+Measuring the change of distance in a time-dependent (expanding) universe only 
makes sense if we can add up space and time@footnote{In other words, making our 
space-time consistent with Minkowski space-time geometry.
+In this geometry, different observers at a given point (event) in space-time 
split up space-time into `space' and `time' in different ways, just like people 
at the same spatial position can make different choices of splitting up a map 
into `left--right' and `up--down'.
+This model is well supported by twentieth and twenty-first century 
observations.}.
+But we can only add bits of space and time together if we measure them in the 
same units: with a conversion constant (similar to how 1000 is used to convert 
a kilometer into meters).
+Experimentally, we find strong support for the hypothesis that this conversion 
constant is the speed of light (or gravitational waves@footnote{The speed of 
gravitational waves was recently found to be very similar to that of light in 
vacuum, see @url{https://arxiv.org/abs/1710.05834, arXiv:1710.05834}.}) in a 
vacuum.
+This speed is postulated to be constant@footnote{In @emph{natural units}, 
speed is measured in units of the speed of light in vacuum.} and is almost 
always written as @mymath{c}.
+We can thus parameterize the change in distance on an expanding 2D surface as
 
 @dispmath{ds^2=c^2dt^2-a^2(t)ds_s^2 = c^2dt^2-a^2(t)(d\chi^2+r^2d\phi^2).}
 
@@ -23188,33 +16888,25 @@ distance on an expanding 2D surface as
 @node Extending distance concepts to 3D, Invoking astcosmiccal, Distance on a 
2D curved space, CosmicCalculator
 @subsection Extending distance concepts to 3D
 
-The concepts of @ref{Distance on a 2D curved space} are here extended to a
-3D space that @emph{might} be curved. We can start with the generic
-infinitesimal distance in a static 3D universe, but this time in spherical
-coordinates instead of polar coordinates.  @mymath{\theta} is shown in
-@ref{sphereandplane}, but here we are 3D beings, positioned on @mymath{O}
-(the center of the sphere) and the point @mymath{O} is tangent to a
-4D-sphere. In our 3D space, a generic infinitesimal displacement will
-correspond to the following distance in spherical coordinates:
+The concepts of @ref{Distance on a 2D curved space} are here extended to a 3D 
space that @emph{might} be curved.
+We can start with the generic infinitesimal distance in a static 3D universe, 
but this time in spherical coordinates instead of polar coordinates.
+@mymath{\theta} is shown in @ref{sphereandplane}, but here we are 3D beings, 
positioned on @mymath{O} (the center of the sphere) and the point @mymath{O} is 
tangent to a 4D-sphere.
+In our 3D space, a generic infinitesimal displacement will correspond to the 
following distance in spherical coordinates:
 
 @dispmath{ds_s^2=dx^2+dy^2+dz^2=dr^2+r^2(d\theta^2+\sin^2{\theta}d\phi^2).}
 
-Like the 2D creature before, we now have to assume an abstract dimension
-which we cannot visualize easily. Let's call the fourth dimension
-@mymath{w}, then the general change in coordinates in the @emph{full} four
-dimensional space will be:
+Like the 2D creature before, we now have to assume an abstract dimension which 
we cannot visualize easily.
+Let's call the fourth dimension @mymath{w}; then the general change in 
coordinates in the @emph{full} four-dimensional space will be:
 
 @dispmath{ds_s^2=dr^2+r^2(d\theta^2+\sin^2{\theta}d\phi^2)+dw^2.}
 
 @noindent
-But we can only work on a 3D curved space, so following exactly the same
-steps and conventions as our 2D friend, we arrive at:
+But we can only work on a 3D curved space, so following exactly the same steps 
and conventions as our 2D friend, we arrive at:
 
 @dispmath{ds_s^2={dr^2\over 1-Kr^2}+r^2(d\theta^2+\sin^2{\theta}d\phi^2).}
 
 @noindent
-In a non-static universe (with a scale factor a(t)), the distance can be
-written as:
+In a non-static universe (with a scale factor @mymath{a(t)}), the distance 
can be written as:
 
 @dispmath{ds^2=c^2dt^2-a^2(t)[d\chi^2+r^2(d\theta^2+\sin^2{\theta}d\phi^2)].}
 
@@ -23245,9 +16937,8 @@ written as:
 @node Invoking astcosmiccal,  , Extending distance concepts to 3D, 
CosmicCalculator
 @subsection Invoking CosmicCalculator
 
-CosmicCalculator will calculate cosmological variables based on the
-input parameters. The executable name is @file{astcosmiccal} with the
-following general template
+CosmicCalculator will calculate cosmological variables based on the input 
parameters.
+The executable name is @file{astcosmiccal} with the following general template
 
 @example
 $ astcosmiccal [OPTION...] ...
@@ -23277,21 +16968,16 @@ $ astcosmiccal -z0.4 -LAg
 $ astcosmiccal -l0.7 -m0.3 -z2.1
 @end example
 
-The input parameters (for example current matter density and etc) can be
-given as command-line options or in the configuration files, see
-@ref{Configuration files}. For a definition of the different parameters,
-please see the sections prior to this. If no redshift is given,
-CosmicCalculator will just print its input parameters and abort. For a full
-list of the input options, please see @ref{CosmicCalculator input options}.
+The input parameters (for example the current matter density) can be given as 
command-line options or in the configuration files, see @ref{Configuration 
files}.
+For a definition of the different parameters, please see the sections prior to 
this.
+If no redshift is given, CosmicCalculator will just print its input parameters 
and abort.
+For a full list of the input options, please see @ref{CosmicCalculator input 
options}.
 
-When only a redshift is given, CosmicCalculator will print all calculations
-(one per line) with some explanations before each. This can be good when
-you want a general feeling of the conditions at a specific
-redshift. Alternatively, if any specific calculations are requested, only
-the requested values will be calculated and printed with one character
-space between them. In this case, no description will be printed. See
-@ref{CosmicCalculator specific calculations} for the full list of these
-options along with some explanations how when/how they can be useful.
+When only a redshift is given, CosmicCalculator will print all calculations 
(one per line) with some explanations before each.
+This can be good when you want a general feeling for the conditions at a 
specific redshift.
+Alternatively, if any specific calculations are requested, only the requested 
values will be calculated and printed with one character space between them.
+In this case, no description will be printed.
+See @ref{CosmicCalculator specific calculations} for the full list of these 
options along with some explanations of when/how they can be useful.
 
 @menu
 * CosmicCalculator input options::  Options to specify input conditions.
@@ -23314,33 +17000,28 @@ Current expansion rate (in km sec@mymath{^{-1}} 
Mpc@mymath{^{-1}}).
 
 @item -l FLT
 @itemx --olambda=FLT
-Cosmological constant density divided by the critical density in the
-current Universe (@mymath{\Omega_{\Lambda,0}}).
+Cosmological constant density divided by the critical density in the current 
Universe (@mymath{\Omega_{\Lambda,0}}).
 
 @item -m FLT
 @itemx --omatter=FLT
-Matter (including massive neutrinos) density divided by the critical
-density in the current Universe (@mymath{\Omega_{m,0}}).
+Matter (including massive neutrinos) density divided by the critical density 
in the current Universe (@mymath{\Omega_{m,0}}).
 
 @item -r FLT
 @itemx --oradiation=FLT
-Radiation density divided by the critical density in the current Universe
-(@mymath{\Omega_{r,0}}).
+Radiation density divided by the critical density in the current Universe 
(@mymath{\Omega_{r,0}}).
 
 @item -O STR/FLT,FLT
 @itemx --obsline=STR/FLT,FLT
 @cindex Rest-frame wavelength
 @cindex Wavelength, rest-frame
-Find the redshift to use in next steps based on the rest-frame and observed
-wavelengths of a line. Wavelengths are assumed to be in Angstroms. The
-first argument identifies the line. It can be one of the standard names
-below, or any rest-frame wavelength in Angstroms. The second argument is
-the observed wavelength of that line. For example
-@option{--obsline=lyalpha,6000} is the same as
-@option{--obsline=1215.64,6000}.
+Find the redshift to use in the next steps based on the rest-frame and 
observed wavelengths of a line.
+Wavelengths are assumed to be in Angstroms.
+The first argument identifies the line.
+It can be one of the standard names below, or any rest-frame wavelength in 
Angstroms.
+The second argument is the observed wavelength of that line.
+For example @option{--obsline=lyalpha,6000} is the same as 
@option{--obsline=1215.64,6000}.
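+
+A hypothetical invocation sketch (find the redshift of a Lyman-alpha line 
observed at 6000 Angstroms, then print the age of the universe at that 
redshift):
+
+@example
+$ astcosmiccal --obsline=lyalpha,6000 --age
+@end example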
 
-The accepted names are listed below, sorted from red (longer wavelength) to
-blue (shorter wavelength).
+The accepted names are listed below, sorted from red (longer wavelength) to 
blue (shorter wavelength).
 
 @table @code
 @item siired
@@ -23458,30 +17139,20 @@ blue (shorter wavelength).
 
 @node CosmicCalculator specific calculations,  , CosmicCalculator input 
options, Invoking astcosmiccal
 @subsubsection CosmicCalculator specific calculations
-By default, when no specific calculations are requested, CosmicCalculator
-will print a complete set of all its calculators (one line for each
-calculation, see @ref{Invoking astcosmiccal}). The full list of
-calculations can be useful when you don't want any specific value, but just
-a general view. In other contexts (for example in a batch script or during
-a discussion), you know exactly what you want and don't want to be
-distracted by all the extra information.
-
-You can use any number of the options described below in any order. When
-any of these options are requested, CosmicCalculator's output will just be
-a single line with a single space between the (possibly) multiple
-values. In the example below, only the tangential distance along one
-arcsecond (in kpc), absolute magnitude conversion, and age of the universe
-at redshift 2 are printed (recall that you can merge short options
-together, see @ref{Options}).
+By default, when no specific calculations are requested, CosmicCalculator will 
print a complete set of all its calculations (one line for each calculation, 
see @ref{Invoking astcosmiccal}).
+The full list of calculations can be useful when you don't want any specific 
value, but just a general view.
+In other contexts (for example in a batch script or during a discussion), you 
know exactly what you want and don't want to be distracted by all the extra 
information.
+
+You can use any number of the options described below in any order.
+When any of these options are requested, CosmicCalculator's output will just 
be a single line with a single space between the (possibly) multiple values.
+In the example below, only the tangential distance along one arcsecond (in 
kpc), absolute magnitude conversion, and age of the universe at redshift 2 are 
printed (recall that you can merge short options together, see @ref{Options}).
 
 @example
 $ astcosmiccal -z2 -sag
 8.585046 44.819248 3.289979
 @end example
 
-Here is one example of using this feature in scripts: by adding the
-following two lines in a script to keep/use the comoving volume with
-varying redshifts:
+Here is one example of using this feature in scripts: add the following two 
lines to keep/use the comoving volume with varying redshifts:
 
 @example
 z=3.12
@@ -23490,75 +17161,56 @@ vol=$(astcosmiccal --redshift=$z --volume)
 
 @cindex GNU Grep
 @noindent
-In a script, this operation might be necessary for a large number of
-objects (several of galaxies in a catalog for example). So the fact that
-all the other default calculations are ignored will also help you get to
-your result faster.
-
-If you are indeed dealing with many (for example thousands) of redshifts,
-using CosmicCalculator is not the best/fastest solution. Because it has to
-go through all the configuration files and preparations for each
-invocation. To get the best efficiency (least overhead), we recommend using
-Gnuastro's cosmology library (see @ref{Cosmology
-library}). CosmicCalculator also calls the library functions defined there
-for its calculations, so you get the same result with no overhead. Gnuastro
-also has libraries for easily reading tables into a C program, see
-@ref{Table input output}. Afterwards, you can easily build and run your C
-program for the particular processing with @ref{BuildProgram}.
-
-If you just want to inspect the value of a variable visually, the
-description (which comes with units) might be more useful. In such cases,
-the following command might be better. The other calculations will also be
-done, but they are so fast that you will not notice on modern computers
-(the time it takes your eye to focus on the result is usually longer than
-the processing: a fraction of a second).
+In a script, this operation might be necessary for a large number of objects 
(for example the galaxies in a catalog).
+So the fact that all the other default calculations are ignored will also help 
you get to your result faster.
+
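+For example, a hypothetical sketch of the same call inside a shell loop over 
a plain-text file @file{redshifts.txt} (one redshift per line):
+
+@example
+$ while read z; do astcosmiccal --redshift=$z --volume; done < redshifts.txt
+@end example
+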
+If you are indeed dealing with many redshifts (for example thousands), using 
CosmicCalculator is not the best/fastest solution, because it has to go 
through all the configuration files and preparations for each invocation.
+To get the best efficiency (least overhead), we recommend using Gnuastro's 
cosmology library (see @ref{Cosmology library}).
+CosmicCalculator also calls the library functions defined there for its 
calculations, so you get the same result with no overhead.
+Gnuastro also has libraries for easily reading tables into a C program, see 
@ref{Table input output}.
+Afterwards, you can easily build and run your C program for the particular 
processing with @ref{BuildProgram}.
+
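+For example, a minimal sketch in C (assuming the @code{gal_cosmology_age} 
function of @file{gnuastro/cosmology.h} keeps the signature used here; check 
that header for the exact form, and note that the parameter values below are 
arbitrary):
+
+@example
+#include <stdio.h>
+#include <gnuastro/cosmology.h>
+
+int
+main(void)
+@{
+  /* Hypothetical inputs: redshift, H0, olambda, omatter, oradiation. */
+  double age=gal_cosmology_age(3.12, 70.0, 0.7, 0.3, 0.0);
+  printf("Age of the universe at z=3.12: %f\n", age);
+  return 0;
+@}
+@end example
+
+It could then be compiled and run with BuildProgram (for example 
@code{astbuildprog myprogram.c}, see @ref{BuildProgram}).
+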
+If you just want to inspect the value of a variable visually, the description 
(which comes with units) might be more useful.
+In such cases, the following command might be better.
+The other calculations will also be done, but they are so fast that you will 
not notice on modern computers (the time it takes your eye to focus on the 
result is usually longer than the processing: a fraction of a second).
 
 @example
 $ astcosmiccal --redshift=0.832 | grep volume
 @end example
 
-The full list of CosmicCalculator's specific calculations is present
-below. In case you have forgot the units, you can use the @option{--help}
-option which has the units along with a short description.
+The full list of CosmicCalculator's specific calculations is given below.
+In case you have forgotten the units, you can use the @option{--help} option, 
which has the units along with a short description.
 
 @table @option
 
 @item -e
 @itemx --usedredshift
-The redshift that was used in this run. In many cases this is the main
-input parameter to CosmicCalculator, but it is useful in others. For
-example in combination with @option{--obsline} (where you give an observed
-and restframe wavelength and would like to know the redshift), or if you
-want to run CosmicCalculator in a loop while changing the redshift and you
-want to keep the redshift value.
+The redshift that was used in this run.
+In many cases this is the main input parameter to CosmicCalculator, but it is 
useful in others.
+For example in combination with @option{--obsline} (where you give an observed 
and rest-frame wavelength and would like to know the redshift), or if you want 
to run CosmicCalculator in a loop while changing the redshift and you want to 
keep the redshift value.
 
 @item -G
 @itemx --agenow
-The current age of the universe (given the input parameters) in Ga (Giga
-annum, or billion years).
+The current age of the universe (given the input parameters) in Ga (Giga 
annum, or billion years).
 
 @item -C
 @itemx --criticaldensitynow
-The current critical density (given the input parameters) in grams per
-centimeter-cube (@mymath{g/cm^3}).
+The current critical density (given the input parameters) in grams per 
cubic centimeter (@mymath{g/cm^3}).
 
 @item -d
 @itemx --properdistance
-The proper distance (at current time) to object at the given redshift in
-Megaparsecs (Mpc). See @ref{Distance on a 2D curved space} for a
-description of the proper distance.
+The proper distance (at current time) to the object at the given redshift in 
Megaparsecs (Mpc).
+See @ref{Distance on a 2D curved space} for a description of the proper 
distance.
 
 @item -A
 @itemx --angulardimdist
-The angular diameter distance to object at given redshift in Megaparsecs
-(Mpc).
+The angular diameter distance to the object at the given redshift in 
Megaparsecs (Mpc).
 
 @item -s
 @itemx --arcsectandist
-The tangential distance covered by 1 arcseconds at the given redshift in
-kiloparsecs (Kpc). This can be useful when trying to estimate the
-resolution or pixel scale of an instrument (usually in units of arcseconds)
-at a given redshift.
+The tangential distance covered by 1 arcsecond at the given redshift in 
kiloparsecs (kpc).
+This can be useful when trying to estimate the resolution or pixel scale of an 
instrument (usually in units of arcseconds) at a given redshift.
 
 @item -L
 @itemx --luminositydist
@@ -23570,11 +17222,9 @@ The distance modulus at given redshift.
 
 @item -a
 @itemx --absmagconv
-The conversion factor (addition) to absolute magnitude. Note that this is
-practically the distance modulus added with @mymath{-2.5\log{(1+z)}} for
-the the desired redshift based on the input parameters. Once the apparent
-magnitude and redshift of an object is known, this value may be added with
-the apparent magnitude to give the object's absolute magnitude.
+The conversion factor (addition) to absolute magnitude.
+Note that this is practically the distance modulus added with 
@mymath{-2.5\log{(1+z)}} for the desired redshift based on the input parameters.
+Once the apparent magnitude and redshift of an object are known, this value 
may be added to the apparent magnitude to give the object's absolute magnitude.
 
 @item -g
 @itemx --age
@@ -23582,20 +17232,23 @@ Age of the universe at given redshift in Ga (Giga 
annum, or billion years).
 
 @item -b
 @itemx --lookbacktime
-The look-back time to given redshift in Ga (Giga annum, or billion
-years). The look-back time at a given redshift is defined as the current
-age of the universe (@option{--agenow}) subtracted by the age of the
-universe at the given redshift.
+The look-back time to the given redshift in Ga (Giga annum, or billion years).
+The look-back time at a given redshift is defined as the current age of the 
universe (@option{--agenow}) minus the age of the universe at the given 
redshift.
 
 @item -c
 @itemx --criticaldensity
-The critical density at given redshift in grams per centimeter-cube
-(@mymath{g/cm^3}).
+The critical density at the given redshift in grams per cubic centimeter 
(@mymath{g/cm^3}).
 
 @item -v
 @itemx --onlyvolume
-The comoving volume in Megaparsecs cube (Mpc@mymath{^3}) until the desired
-redshift based on the input parameters.
+The comoving volume in cubic Megaparsecs (Mpc@mymath{^3}) out to the desired 
redshift based on the input parameters.
+
+@item -i STR/FLT
+@itemx --lineatz=STR/FLT
+The wavelength of the specified line at the redshift given to CosmicCalculator.
+The line can be specified either by its name (see description of 
@option{--obsline} in @ref{CosmicCalculator input options}), or directly as a 
number.
+In the former case (when a name is given), the returned number is in units of 
Angstroms.
+In the latter (when a number is given), the returned value is in the same 
units as the input number.
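+
+For example, a hypothetical sketch (print the observed wavelength of the 
Lyman-alpha line if emitted at redshift 2.5):
+
+@example
+$ astcosmiccal --redshift=2.5 --lineatz=lyalpha
+@end example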
 
 @end table
 
@@ -23623,34 +17276,22 @@ redshift based on the input parameters.
 @node Library, Developing, High-level calculations, Top
 @chapter Library
 
-Each program in Gnuastro that was discussed in the prior chapters (or any
-program in general) is a collection of functions that is compiled into one
-executable file which can communicate directly with the outside world. The
-outside world in this context is the operating system. By communication, we
-mean that control is directly passed to a program from the operating system
-with a (possible) set of inputs and after it is finished, the program will
-pass control back to the operating system. For programs written in C and
-C++, the unique @code{main} function is in charge of this communication.
-
-Similar to a program, a library is also a collection of functions that is
-compiled into one executable file. However, unlike programs, libraries
-don't have a @code{main} function. Therefore they can't communicate
-directly with the outside world. This gives you the chance to write your
-own @code{main} function and call library functions from within it. After
-compiling your program into a binary executable, you just have to
-@emph{link} it to the library and you are ready to run (execute) your
-program. In this way, you can use Gnuastro at a much lower-level, and in
-combination with other libraries on your system, you can significantly
-boost your creativity.
-
-This chapter starts with a basic introduction to libraries and how you can
-use them in @ref{Review of library fundamentals}. The separate functions in
-the Gnuastro library are then introduced (classified by context) in
-@ref{Gnuastro library}. If you end up routinely using a fixed set of library
-functions, with a well-defined input and output, it will be much more
-beneficial if you define a program for the job. Therefore, in its
-@ref{Version controlled source}, Gnuastro comes with the @ref{The TEMPLATE
-program} to easily define your own programs(s).
+Each program in Gnuastro that was discussed in the prior chapters (or any 
program in general) is a collection of functions that is compiled into one 
executable file, which can communicate directly with the outside world.
+The outside world in this context is the operating system.
+By communication, we mean that control is directly passed to a program from 
the operating system with a (possible) set of inputs, and after it is 
finished, the program will pass control back to the operating system.
+For programs written in C and C++, the unique @code{main} function is in 
charge of this communication.
+
+Similar to a program, a library is also a collection of functions that is 
compiled into one executable file.
+However, unlike programs, libraries don't have a @code{main} function.
+Therefore they can't communicate directly with the outside world.
+This gives you the chance to write your own @code{main} function and call 
library functions from within it.
+After compiling your program into a binary executable, you just have to 
@emph{link} it to the library and you are ready to run (execute) your program.
+In this way, you can use Gnuastro at a much lower-level, and in combination 
with other libraries on your system, you can significantly boost your 
creativity.
+
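+As a hypothetical sketch (the file and program names are arbitrary; the 
details are discussed in @ref{Linking}), such a compile-and-link step could 
look like this:
+
+@example
+$ gcc myprogram.c -lgnuastro -o myprogram
+@end example
+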
+This chapter starts with a basic introduction to libraries and how you can use 
them in @ref{Review of library fundamentals}.
+The separate functions in the Gnuastro library are then introduced (classified 
by context) in @ref{Gnuastro library}.
+If you end up routinely using a fixed set of library functions, with a 
well-defined input and output, it will be much more beneficial if you define a 
program for the job.
+Therefore, in its @ref{Version controlled source}, Gnuastro comes with the 
@ref{The TEMPLATE program} to easily define your own program(s).
 
 
 @menu
@@ -23663,33 +17304,20 @@ program} to easily define your own programs(s).
 @node Review of library fundamentals, BuildProgram, Library, Library
 @section Review of library fundamentals
 
-Gnuastro's libraries are written in the C programming language. In @ref{Why
-C}, we have thoroughly discussed the reasons behind this choice. C was
-actually created to write Unix, thus understanding the way C works can
-greatly help in effectively using programs and libraries in all Unix-like
-operating systems. Therefore, in the following subsections some important
-aspects of C, as it relates to libraries (and thus programs that depend on
-them) on Unix are reviewed. First we will discuss header files in
-@ref{Headers} and then go onto @ref{Linking}. This section finishes with
-@ref{Summary and example on libraries}. If you are already familiar with
-these concepts, please skip this section and go directly to @ref{Gnuastro
-library}.
+Gnuastro's libraries are written in the C programming language.
+In @ref{Why C}, we have thoroughly discussed the reasons behind this choice.
+C was actually created to write Unix, so understanding the way C works can 
greatly help in effectively using programs and libraries in all Unix-like 
operating systems.
+Therefore, in the following subsections some important aspects of C, as it 
relates to libraries (and thus programs that depend on them) on Unix, are 
reviewed.
+First we will discuss header files in @ref{Headers} and then go on to 
@ref{Linking}.
+This section finishes with @ref{Summary and example on libraries}.
+If you are already familiar with these concepts, please skip this section and 
go directly to @ref{Gnuastro library}.
 
 @cindex Modularity
-In theory, a full operating system (or any software) can be written as one
-function. Such a software would not need any headers or linking (that are
-discussed in the subsections below). However, writing that single function
-and maintaining it (adding new features, fixing bugs, documentation and
-etc) would be a programmer or scientist's worst nightmare! Furthermore, all
-the hard work that went into creating it cannot be reused in other
-software: every other programmer or scientist would have to re-invent the
-wheel. The ultimate purpose behind libraries (which come with headers and
-have to be linked) is to address this problem and increase modularity:
-``the degree to which a system's components may be separated and
-recombined'' (from Wikipedia). The more modular the source code of a
-program or library, the easier maintaining it will be, and all the hard
-work that went into creating it can be reused for a wider range of
-problems.
+In theory, a full operating system (or any software) can be written as one 
function.
+Such software would not need any headers or linking (that are discussed in 
the subsections below).
+However, writing that single function and maintaining it (adding new features, 
fixing bugs, documentation, etc) would be a programmer or scientist's worst 
nightmare!
+Furthermore, all the hard work that went into creating it cannot be reused in 
other software: every other programmer or scientist would have to re-invent 
the wheel.
+The ultimate purpose behind libraries (which come with headers and have to be 
linked) is to address this problem and increase modularity: ``the degree to 
which a system's components may be separated and recombined'' (from Wikipedia).
+The more modular the source code of a program or library, the easier 
maintaining it will be, and all the hard work that went into creating it can be 
reused for a wider range of problems.
 
 @menu
 * Headers::                     Header files included in source.
@@ -23701,21 +17329,13 @@ problems.
 @subsection Headers
 
 @cindex Pre-Processor
-C source code is read from top to bottom in the source file, therefore
-program components (for example variables, data structures and functions)
-should all be @emph{defined} or @emph{declared} closer to the top of the
-source file: before they are used. @emph{Defining} something in C or C++ is
-jargon for providing its full details. @emph{Declaring} it, on the
-other-hand, is jargon for only providing the minimum information needed for
-the compiler to pass it temporarily and fill in the detailed definition
-later.
-
-For a function, the @emph{declaration} only contains the inputs and their
-data-types along with the output's type@footnote{Recall that in C,
-functions only have one output.}. The @emph{definition} adds to the
-declaration by including the exact details of what operations are done to
-the inputs to generate the output. As an example, take this simple
-summation function:
+C source code is read from top to bottom in the source file; therefore, 
program components (for example variables, data structures and functions) 
should all be @emph{defined} or @emph{declared} closer to the top of the 
source file: before they are used.
+@emph{Defining} something in C or C++ is jargon for providing its full details.
+@emph{Declaring} it, on the other hand, is jargon for only providing the 
minimum information needed for the compiler to pass it temporarily and fill in 
the detailed definition later.
+
+For a function, the @emph{declaration} only contains the inputs and their 
data-types along with the output's type@footnote{Recall that in C, functions 
only have one output.}.
+The @emph{definition} adds to the declaration by including the exact details 
of what operations are done to the inputs to generate the output.
+As an example, take this simple summation function:
 
 @example
 double
@@ -23725,13 +17345,10 @@ sum(double a, double b)
 @}
 @end example
 @noindent
-What you see above is the @emph{definition} of this function: it shows you
-(and the compiler) exactly what it does to the two @code{double} type
-inputs and that the output also has a @code{double} type. Note that a
-function's internal operations are rarely so simple and short, it can be
-arbitrarily long and complicated. This unreasonably short and simple
-function was chosen here for ease of reading. The declaration for this
-function is:
+What you see above is the @emph{definition} of this function: it shows you 
(and the compiler) exactly what it does to the two @code{double} type inputs 
and that the output also has a @code{double} type.
+Note that a function's internal operations are rarely so simple and short; 
they can be arbitrarily long and complicated.
+This unreasonably short and simple function was chosen here for ease of 
reading.
+The declaration for this function is:
 
 @example
 double
@@ -23739,160 +17356,90 @@ sum(double a, double b);
 @end example
 
 @noindent
-You can think of a function's declaration as a building's address in the
-city, and the definition as the building's complete blueprints. When the
-compiler confronts a call to a function during its processing, it doesn't
-need to know anything about how the inputs are processed to generate the
-output. Just as the postman doesn't need to know the inner structure of a
-building when delivering the mail. The declaration (address) is
-enough. Therefore by @emph{declaring} the functions once at the start of
-the source files, we don't have to worry about @emph{defining} them after
-they are used.
-
-Even for a simple real-world operation (not a simple summation like
-above!), you will soon need many functions (for example, some for
-reading/preparing the inputs, some for the processing, and some for
-preparing the output). Although it is technically possible, managing all
-the necessary functions in one file is not easy and is contrary to the
-modularity principle (see @ref{Review of library fundamentals}), for
-example the functions for preparing the input can be usable in your other
-projects with a different processing. Therefore, as we will see later (in
-@ref{Linking}), the functions don't necessarily need to be defined in the
-source file where they are used. As long as their definitions are
-ultimately linked to the final executable, everything will be fine. For
-now, it is just important to remember that the functions that are called
-within one source file must be declared within the source file
-(declarations are mandatory), but not necessarily defined there.
-
-In the spirit of modularity, it is common to define contextually similar
-functions in one source file. For example, in Gnuastro, functions that
-calculate the median, mean and other statistical functions are defined in
-@file{lib/statistics.c}, while functions that deal directly with FITS files
-are defined in @file{lib/fits.c}.
-
-Keeping the definition of similar functions in a separate file greatly
-helps their management and modularity, but this fact alone doesn't make
-things much easier for the caller's source code: recall that while
-definitions are optional, declarations are mandatory. So if this was all,
-the caller would have to manually copy and paste (@emph{include}) all the
-declarations from the various source files into the file they are working
-on now. To address this problem, programmers have adopted the header file
-convention: the header file of a source code contains all the declarations
-that a caller would need to be able to use any of its functions. For
-example, in Gnuastro, @file{lib/statistics.c} (file containing function
-definitions) comes with @file{lib/gnuastro/statistics.h} (only containing
-function declarations).
-
-The discussion above was mainly focused on functions, however, there are
-many more programming constructs such as pre-processor macros and data
-structures. Like functions, they also need to be known to the compiler when
-it confronts a call to them. So the header file also contains their
-definitions or declarations when they are necessary for the functions.
+You can think of a function's declaration as a building's address in the city, 
and the definition as the building's complete blueprints.
+When the compiler confronts a call to a function during its processing, it doesn't need to know anything about how the inputs are processed to generate the output, just as the postman doesn't need to know the inner structure of a building when delivering the mail.
+The declaration (address) is enough.
+Therefore by @emph{declaring} the functions once at the start of the source 
files, we don't have to worry about @emph{defining} them after they are used.
+
+Even for a simple real-world operation (not a simple summation like above!), 
you will soon need many functions (for example, some for reading/preparing the 
inputs, some for the processing, and some for preparing the output).
+Although it is technically possible, managing all the necessary functions in one file is not easy and is contrary to the modularity principle (see @ref{Review of library fundamentals}); for example, the functions for preparing the input could be used in your other projects with a different processing.
+Therefore, as we will see later (in @ref{Linking}), the functions don't 
necessarily need to be defined in the source file where they are used.
+As long as their definitions are ultimately linked to the final executable, 
everything will be fine.
+For now, it is just important to remember that the functions that are called 
within one source file must be declared within the source file (declarations 
are mandatory), but not necessarily defined there.
+
+In the spirit of modularity, it is common to define contextually similar 
functions in one source file.
+For example, in Gnuastro, functions that calculate the median, mean and other statistical parameters are defined in @file{lib/statistics.c}, while functions that deal directly with FITS files are defined in @file{lib/fits.c}.
+
+Keeping the definition of similar functions in a separate file greatly helps 
their management and modularity, but this fact alone doesn't make things much 
easier for the caller's source code: recall that while definitions are 
optional, declarations are mandatory.
+So if this was all, the caller would have to manually copy and paste 
(@emph{include}) all the declarations from the various source files into the 
file they are working on now.
+To address this problem, programmers have adopted the header file convention: 
the header file of a source code contains all the declarations that a caller 
would need to be able to use any of its functions.
+For example, in Gnuastro, @file{lib/statistics.c} (file containing function 
definitions) comes with @file{lib/gnuastro/statistics.h} (only containing 
function declarations).
+
+The discussion above was mainly focused on functions; however, there are many more programming constructs, such as pre-processor macros and data structures.
+Like functions, they also need to be known to the compiler when it confronts a 
call to them.
+So the header file also contains their definitions or declarations when they 
are necessary for the functions.
 
 @cindex Macro
 @cindex Structures
 @cindex Data structures
 @cindex Pre-processor macros
-Pre-processor macros (or macros for short) are replaced with their defined
-value by the pre-processor before compilation. Conventionally they are
-written only in capital letters to be easily recognized. It is just
-important to understand that the compiler doesn't see the macros, it sees
-their fixed values. So when a header specifies macros you can do your
-programming without worrying about the actual values.  The standard C types
-(for example @code{int}, or @code{float}) are very low-level and basic. We
-can collect multiple C types into a @emph{structure} for a higher-level way
-to keep and pass-along data. See @ref{Generic data container} for some
-examples of macros and data structures.
-
-The contents in the header need to be @emph{include}d into the caller's
-source code with a special pre-processor command: @code{#include
-<path/to/header.h>}. As the name suggests, the @emph{pre-processor} goes
-through the source code prior to the processor (or compiler). One of its
-jobs is to include, or merge, the contents of files that are mentioned with
-this directive in the source code. Therefore the compiler sees a single
-entity containing the contents of the main file and all the included
-files. This allows you to include many (sometimes thousands of)
-declarations into your code with only one line. Since the headers are also
-installed with the library into your system, you don't even need to keep a
-copy of them for each separate program, making things even more convenient.
-
-Try opening some of the @file{.c} files in Gnuastro's @file{lib/} directory
-with a text editor to check out the include directives at the start of the
-file (after the copyright notice). Let's take @file{lib/fits.c} as an
-example. You will notice that Gnuastro's header files (like
-@file{gnuastro/fits.h}) are indeed within this directory (the @file{fits.h}
-file is in the @file{gnuastro/} directory). You will notice that files like
-@file{stdio.h}, or @file{string.h} are not in this directory (or anywhere
-within Gnuastro).
-
-On most systems the basic C header files (like @file{stdio.h} and
-@file{string.h} mentioned above) are located in
-@file{/usr/include/}@footnote{The @file{include/} directory name is taken
-from the pre-processor's @code{#include} directive, which is also the
-motivation behind the `I' in the @option{-I} option to the
-pre-processor.}. Your compiler is configured to automatically search that
-directory (and possibly others), so you don't have to explicitly mention
-these directories. Go ahead, look into the @file{/usr/include} directory
-and find @file{stdio.h} for example. When the necessary header files are
-not in those specific libraries, the pre-processor can also search in
-places other than the current directory. You can specify those directories
-with this pre-processor option@footnote{Try running Gnuastro's
-@command{make} and find the directories given to the compiler with the
-@option{-I} option.}:
+Pre-processor macros (or macros for short) are replaced with their defined 
value by the pre-processor before compilation.
+Conventionally they are written only in capital letters to be easily 
recognized.
+It is just important to understand that the compiler doesn't see the macros; it sees their fixed values.
+So when a header specifies macros you can do your programming without worrying 
about the actual values.
+The standard C types (for example @code{int}, or @code{float}) are very 
low-level and basic.
+We can collect multiple C types into a @emph{structure} for a higher-level way to keep and pass data along.
+See @ref{Generic data container} for some examples of macros and data 
structures.
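+
+For example, a header could define a macro and a structure like the following (a hypothetical sketch for illustration, not taken from Gnuastro's actual headers):
+
+@example
+#define MAXPOINTS 100     /* Macro: the pre-processor replaces     */
+                          /* `MAXPOINTS' with `100' everywhere.    */
+
+struct point              /* Structure: collects multiple basic C  */
+@{                         /* types into one higher-level unit.     */
+  double x;
+  double y;
+@};
+@end example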
+
+The contents of the header need to be @emph{include}d into the caller's source code with a special pre-processor command: @code{#include <path/to/header.h>}.
+As the name suggests, the @emph{pre-processor} goes through the source code 
prior to the processor (or compiler).
+One of its jobs is to include, or merge, the contents of files that are 
mentioned with this directive in the source code.
+Therefore the compiler sees a single entity containing the contents of the 
main file and all the included files.
+This allows you to include many (sometimes thousands of) declarations into 
your code with only one line.
+Since the headers are also installed with the library into your system, you 
don't even need to keep a copy of them for each separate program, making things 
even more convenient.
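+
+For example, the top of a hypothetical source file might contain directives like these (with the quoted form, the pre-processor also searches the file's own directory first):
+
+@example
+#include <stdio.h>            /* Basic C header.            */
+#include <gnuastro/fits.h>    /* Installed library header.  */
+#include "main.h"             /* This program's own header. */
+@end example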
+
+Try opening some of the @file{.c} files in Gnuastro's @file{lib/} directory 
with a text editor to check out the include directives at the start of the file 
(after the copyright notice).
+Let's take @file{lib/fits.c} as an example.
+You will notice that Gnuastro's header files (like @file{gnuastro/fits.h}) are 
indeed within this directory (the @file{fits.h} file is in the @file{gnuastro/} 
directory).
+However, files like @file{stdio.h} or @file{string.h} are not in this directory (or anywhere within Gnuastro).
+
+On most systems the basic C header files (like @file{stdio.h} and 
@file{string.h} mentioned above) are located in 
@file{/usr/include/}@footnote{The @file{include/} directory name is taken from 
the pre-processor's @code{#include} directive, which is also the motivation 
behind the `I' in the @option{-I} option to the pre-processor.}.
+Your compiler is configured to automatically search that directory (and 
possibly others), so you don't have to explicitly mention these directories.
+Go ahead, look into the @file{/usr/include} directory and find @file{stdio.h} 
for example.
+When the necessary header files are not in those specific directories, the pre-processor can also search in places other than the current directory.
+You can specify those directories with this pre-processor option@footnote{Try 
running Gnuastro's @command{make} and find the directories given to the 
compiler with the @option{-I} option.}:
 
 @table @option
 @item -I DIR
-``Add the directory @file{DIR} to the list of directories to be searched
-for header files.  Directories named by '-I' are searched before the
-standard system include directories.  If the directory @file{DIR} is a
-standard system include directory, the option is ignored to ensure that the
-default search order for system directories and the special treatment of
-system headers are not defeated...'' (quoted from the GNU Compiler
-Collection manual). Note that the space between @key{I} and the directory
-is optional and commonly not used.
+``Add the directory @file{DIR} to the list of directories to be searched for 
header files.
+Directories named by '-I' are searched before the standard system include 
directories.
+If the directory @file{DIR} is a standard system include directory, the option 
is ignored to ensure that the default search order for system directories and 
the special treatment of system headers are not defeated...'' (quoted from the 
GNU Compiler Collection manual).
+Note that the space between @key{I} and the directory is optional and commonly 
not used.
 @end table
 
-If the pre-processor can't find the included files, it will abort with an
-error. In fact a common error when building programs that depend on a
-library is that the compiler doesn't not know where a library's header is
-(see @ref{Known issues}). So you have to manually tell the compiler where
-to look for the library's headers with the @option{-I} option. For a small
-software with one or two source files, this can be done manually (see
-@ref{Summary and example on libraries}). However, to enhance modularity,
-Gnuastro (and most other bin/libraries) contain many source files, so
-the compiler is invoked many times@footnote{Nearly every command you see
-being executed after running @command{make} is one call to the
-compiler.}. This makes manual addition or modification of this option
-practically impossible.
+If the pre-processor can't find the included files, it will abort with an 
error.
+In fact, a common error when building programs that depend on a library is that the compiler doesn't know where a library's header is (see @ref{Known issues}).
+So you have to manually tell the compiler where to look for the library's 
headers with the @option{-I} option.
+For a small program with one or two source files, this can be done manually (see @ref{Summary and example on libraries}).
+However, to enhance modularity, Gnuastro (and most other programs/libraries) contains many source files, so the compiler is invoked many times@footnote{Nearly every command you see being executed after running @command{make} is one call to the compiler.}.
+This makes manual addition or modification of this option practically impossible.
 
 @cindex GNU build system
 @cindex @command{CPPFLAGS}
-To solve this problem, in the GNU build system, there are conventional
-environment variables for the various kinds of compiler options (or flags).
-These environment variables are used in every call to the compiler (they
-can be empty). The environment variable used for the C Pre-Processor (or
-CPP) is @command{CPPFLAGS}. By giving @command{CPPFLAGS} a value once, you
-can be sure that each call to the compiler will be affected. See @ref{Known
-issues} for an example of how to set this variable at configure time.
+To solve this problem, in the GNU build system, there are conventional 
environment variables for the various kinds of compiler options (or flags).
+These environment variables are used in every call to the compiler (they can 
be empty).
+The environment variable used for the C Pre-Processor (or CPP) is 
@command{CPPFLAGS}.
+By giving @command{CPPFLAGS} a value once, you can be sure that each call to 
the compiler will be affected.
+See @ref{Known issues} for an example of how to set this variable at configure 
time.
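+
+For example, a hypothetical configure-time call might look like this (the directory is only an illustration):
+
+@example
+$ ./configure CPPFLAGS="-I/home/yourname/.local/include"
+@end example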
 
 @cindex GNU build system
-As described in @ref{Installation directory}, you can select the top
-installation directory of a software using the GNU build system, when you
-@command{./configure} it. All the separate components will be put in their
-separate sub-directory under that, for example the programs, compiled
-libraries and library headers will go into @file{$prefix/bin} (replace
-@file{$prefix} with a directory), @file{$prefix/lib}, and
-@file{$prefix/include} respectively. For enhanced modularity, libraries
-that contain diverse collections of functions (like GSL, WCSLIB, and
-Gnuastro), put their header files in a sub-directory unique to
-themselves. For example all Gnuastro's header files are installed in
-@file{$prefix/include/gnuastro}. In your source code, you need to keep the
-library's sub-directory when including the headers from such libraries, for
-example @code{#include <gnuastro/fits.h>}@footnote{the top
-@file{$prefix/include} directory is usually known to the compiler}. Not all
-libraries need to follow this convention, for example CFITSIO only has one
-header (@file{fitsio.h}) which is directly installed in
-@file{$prefix/include}.
+As described in @ref{Installation directory}, you can select the top installation directory of any software using the GNU build system when you @command{./configure} it.
+All the separate components will be put in their own sub-directory under that; for example, the programs, compiled libraries and library headers will go into @file{$prefix/bin} (replace @file{$prefix} with a directory), @file{$prefix/lib}, and @file{$prefix/include} respectively.
+For enhanced modularity, libraries that contain diverse collections of functions (like GSL, WCSLIB, and Gnuastro) put their header files in a sub-directory unique to themselves.
+For example, all of Gnuastro's header files are installed in @file{$prefix/include/gnuastro}.
+In your source code, you need to keep the library's sub-directory when including the headers from such libraries, for example @code{#include <gnuastro/fits.h>}@footnote{The top @file{$prefix/include} directory is usually known to the compiler.}.
+Not all libraries need to follow this convention; for example, CFITSIO only has one header (@file{fitsio.h}) which is directly installed in @file{$prefix/include}.
 
 
 
@@ -23901,76 +17448,50 @@ header (@file{fitsio.h}) which is directly installed 
in
 @subsection Linking
 
 @cindex GNU Libtool
-To enhance modularity, similar functions are defined in one source file
-(with a @file{.c} suffix, see @ref{Headers} for more). After running
-@command{make}, each human-readable, @file{.c} file is translated (or
-compiled) into a computer-readable ``object'' file (ending with
-@file{.o}). Note that object files are also created when building programs,
-they aren't particular to libraries. Try opening Gnuastro's @file{lib/} and
-@file{bin/progname/} directories after running @command{make} to see these
-object files@footnote{Gnuastro uses GNU Libtool for portable library
-creation. Libtool will also make a @file{.lo} file for each @file{.c} file
-when building libraries (@file{.lo} files are
-human-readable).}. Afterwards, the object files are @emph{linked} together
-to create an executable program or a library.
+To enhance modularity, similar functions are defined in one source file (with 
a @file{.c} suffix, see @ref{Headers} for more).
+After running @command{make}, each human-readable, @file{.c} file is 
translated (or compiled) into a computer-readable ``object'' file (ending with 
@file{.o}).
+Note that object files are also created when building programs; they aren't particular to libraries.
+Try opening Gnuastro's @file{lib/} and @file{bin/progname/} directories after 
running @command{make} to see these object files@footnote{Gnuastro uses GNU 
Libtool for portable library creation.
+Libtool will also make a @file{.lo} file for each @file{.c} file when building 
libraries (@file{.lo} files are human-readable).}.
+Afterwards, the object files are @emph{linked} together to create an 
executable program or a library.
 
 @cindex GNU Binutils
-The object files contain the full definition of the functions in the
-respective @file{.c} file along with a list of any other function (or
-generally ``symbol'') that is referenced there. To get a list of those
-functions you can use the @command{nm} program which is part of GNU
-Binutils. For example from the top Gnuastro directory, run:
+The object files contain the full definition of the functions in the 
respective @file{.c} file along with a list of any other function (or generally 
``symbol'') that is referenced there.
+To get a list of those functions you can use the @command{nm} program which is 
part of GNU Binutils.
+For example from the top Gnuastro directory, run:
 
 @example
 $ nm bin/arithmetic/arithmetic.o
 @end example
 
 @noindent
-This will print a list of all the functions (more generally, `symbols')
-that were called within @file{bin/arithmetic/arithmetic.c} along with some
-further information (for example a @code{T} in the second column shows that
-this function is actually defined here, @code{U} says that it is undefined
-here). Try opening the @file{.c} file to check some of these functions for
-your self. Run @command{info nm} for more information.
+This will print a list of all the functions (more generally, `symbols') that 
were called within @file{bin/arithmetic/arithmetic.c} along with some further 
information (for example a @code{T} in the second column shows that this 
function is actually defined here, @code{U} says that it is undefined here).
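+For example, you might see lines like these among the outputs (a hypothetical excerpt; the exact symbols and addresses depend on your system and Gnuastro version):
+
+@example
+0000000000000000 T arithmetic
+                 U gal_fits_img_read
+@end example
+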
+Try opening the @file{.c} file to check some of these functions for yourself.
+Run @command{info nm} for more information.
 
 @cindex Linking
-To recap, the @emph{compiler} created the separate object files mentioned
-above for each @file{.c} file. The @emph{linker} will then combine all the
-symbols of the various object files (and libraries) into one program or
-library. In the case of Arithmetic (a program) the contents of the object
-files in @file{bin/arithmetic/} are copied (and re-ordered) into one final
-executable file which we can run from the operating system.
+To recap, the @emph{compiler} created the separate object files mentioned 
above for each @file{.c} file.
+The @emph{linker} will then combine all the symbols of the various object 
files (and libraries) into one program or library.
+In the case of Arithmetic (a program), the contents of the object files in @file{bin/arithmetic/} are copied (and re-ordered) into one final executable file which we can run from the operating system.
 
 @cindex Static linking
 @cindex Linking: Static
 @cindex Dynamic linking
 @cindex Linking: Dynamic
-There are two ways to @emph{link} all the necessary symbols: static and
-dynamic/shared. When the symbols (computer-readable function definitions in
-most cases) are copied into the output, it is called @emph{static}
-linking. When the symbols are kept in their original file and only a
-reference to them is kept in the executable, it is called @emph{dynamic},
-or @emph{shared} linking.
-
-Let's have a closer look at the executable to understand this better: we'll
-assume you have built Gnuastro without any customization and installed
-Gnuastro into the default @file{/usr/local/} directory (see
-@ref{Installation directory}). If you tried the @command{nm} command on one
-of Arithmetic's object files above, then with the command below you can
-confirm that all the functions that were defined in the object file above
-(had a @code{T} in the second column) are also defined in the
-@file{astarithmetic} executable:
+There are two ways to @emph{link} all the necessary symbols: static and 
dynamic/shared.
+When the symbols (computer-readable function definitions in most cases) are 
copied into the output, it is called @emph{static} linking.
+When the symbols are kept in their original file and only a reference to them 
is kept in the executable, it is called @emph{dynamic}, or @emph{shared} 
linking.
+
+Let's have a closer look at the executable to understand this better: we'll assume you have built Gnuastro without any customization and installed it into the default @file{/usr/local/} directory (see @ref{Installation directory}).
+If you tried the @command{nm} command on one of Arithmetic's object files 
above, then with the command below you can confirm that all the functions that 
were defined in the object file above (had a @code{T} in the second column) are 
also defined in the @file{astarithmetic} executable:
 
 @example
 $ nm /usr/local/bin/astarithmetic
 @end example
 
 @noindent
-These symbols/function have been statically linked (copied) in the final
-executable. But you will notice that there are still many undefined symbols
-in the executable (those with a @code{U} in the second column). One class
-of such functions are Gnuastro's own library functions that start with
-`@code{gal_}':
+These symbols/functions have been statically linked (copied) into the final executable.
+But you will notice that there are still many undefined symbols in the 
executable (those with a @code{U} in the second column).
+One class of such functions are Gnuastro's own library functions that start 
with `@code{gal_}':
 
 @example
 $ nm /usr/local/bin/astarithmetic | grep gal_
@@ -23982,68 +17503,47 @@ $ nm /usr/local/bin/astarithmetic | grep gal_
 @cindex Library: shared
 @cindex Dynamic linking
 @cindex Linking: dynamic
-These undefined symbols (functions) are present in another file and will be
-linked to the Arithmetic program every time you run it. Therefore they are
-known as dynamically @emph{linked} libraries @footnote{Do not confuse
-dynamically @emph{linked} libraries with dynamically @emph{loaded}
-libraries. The former (that is discussed here) are only loaded once at the
-program startup. However, the latter can be loaded anytime during the
-program's execution, they are also known as plugins.}. As we saw above,
-static linking is done when the executable is being built. However, when a
-program is dynamically linked to a library, at build-time, the library's
-symbols are only checked with the available libraries: they are not
-actually copied into the program's executable. Every time you run the
-program, the (dynamic) linker will be activated and will try to link the
-program to the installed library before the program starts.
-
-If you want all the libraries to be statically linked to the executables,
-you have to tell Libtool (which Gnuastro uses for the linking) to disable
-shared libraries at configure time@footnote{Libtool is very common and is
-commonly used. Therefore, you can use this option to configure on most
-programs using the GNU build system if you want static linking.}:
+These undefined symbols (functions) are present in another file and will be 
linked to the Arithmetic program every time you run it.
+Therefore they are known as dynamically @emph{linked} libraries@footnote{Do not confuse dynamically @emph{linked} libraries with dynamically @emph{loaded} libraries.
+The former (discussed here) are only loaded once at program startup.
+However, the latter can be loaded anytime during the program's execution; they are also known as plugins.}.
+As we saw above, static linking is done when the executable is being built.
+However, when a program is dynamically linked to a library, at build-time, the 
library's symbols are only checked with the available libraries: they are not 
actually copied into the program's executable.
+Every time you run the program, the (dynamic) linker will be activated and 
will try to link the program to the installed library before the program starts.
+
+If you want all the libraries to be statically linked to the executables, you have to tell Libtool (which Gnuastro uses for the linking) to disable shared libraries at configure time@footnote{Libtool is very commonly used.
+Therefore, if you want static linking, you can use this option when configuring most programs that use the GNU build system.}:
 
 @example
 $ configure --disable-shared
 @end example
 
 @noindent
-Try configuring Gnuastro with the command above, then build and install it
-(as described in @ref{Quick start}). Afterwards, check the @code{gal_}
-symbols in the installed Arithmetic executable like before. You will see
-that they are actually copied this time (have a @code{T} in the second
-column). If the second column doesn't convince you, look at the executable
-file size with the following command:
+Try configuring Gnuastro with the command above, then build and install it (as 
described in @ref{Quick start}).
+Afterwards, check the @code{gal_} symbols in the installed Arithmetic 
executable like before.
+You will see that they are actually copied this time (have a @code{T} in the 
second column).
+If the second column doesn't convince you, look at the executable file size 
with the following command:
 
 @example
 $ ls -lh /usr/local/bin/astarithmetic
 @end example
 
 @noindent
-It should be around 4.2 Megabytes with this static linking. If you
-configure and build Gnuastro again with shared libraries enabled (which is
-the default), you will notice that it is roughly 100 Kilobytes!
+It should be around 4.2 Megabytes with this static linking.
+If you configure and build Gnuastro again with shared libraries enabled (which 
is the default), you will notice that it is roughly 100 Kilobytes!
 
-This huge difference would have been very significant in the old days, but
-with the roughly Terabyte storage drives commonly in use today, it is
-negligible. Fortunately, output file size is not the only benefit of
-dynamic linking: since it links to the libraries at run-time (rather than
-build-time), you don't have to re-build a higher-level program or library
-when an update comes for one of the lower-level libraries it depends
-on. You just install the new low-level library and it will automatically be
-used/linked next time in the programs that use it. To be fair, this also
-creates a few complications@footnote{Both of these can be avoided by
-joining the mailing lists of the lower-level libraries and checking the
-changes in newer versions before installing them. Updates that result in
-such behaviors are generally heavily emphasized in the release notes.}:
+This huge difference would have been very significant in the old days, but with the terabyte-scale storage drives commonly in use today, it is negligible.
+Fortunately, output file size is not the only benefit of dynamic linking: 
since it links to the libraries at run-time (rather than build-time), you don't 
have to re-build a higher-level program or library when an update comes for one 
of the lower-level libraries it depends on.
+You just install the new low-level library and it will automatically be 
used/linked next time in the programs that use it.
+To be fair, this also creates a few complications@footnote{Both of these can 
be avoided by joining the mailing lists of the lower-level libraries and 
checking the changes in newer versions before installing them.
+Updates that result in such behaviors are generally heavily emphasized in the 
release notes.}:
 
 @itemize
 @item
-Reproducibility: Even though your high-level tool has the same version as
-before, with the updated library, you might not get the same results.
+Reproducibility: Even though your high-level tool has the same version as 
before, with the updated library, you might not get the same results.
 @item
-Broken links: if some functions have been changed or removed in the updated
-library, then the linker will abort with an error at run-time. Therefore
-you need to re-build your higher-level program or library.
+Broken links: if some functions have been changed or removed in the updated 
library, then the linker will abort with an error at run-time.
+Therefore you need to re-build your higher-level program or library.
 @end itemize
 
 @cindex GNU C library
@@ -24056,177 +17556,118 @@ library, you might need another tool.} program, for 
example:
 $ ldd /usr/local/bin/astarithmetic
 @end example
 
-Library file names (in their installation directory) start with a
-@file{lib} and their ending (suffix) shows if they are static (@file{.a})
-or dynamic (@file{.so}), as described below. The name of the library is in
-the middle of these two, for example @file{libgsl.a} or
-@file{libgnuastro.a} (GSL and Gnuastro's static libraries), and
-@file{libgsl.so.23.0.0} or @file{libgnuastro.so.4.0.0} (GSL and Gnuastro's
-shared library, the numbers may be different).
+Library file names (in their installation directory) start with a @file{lib} 
and their ending (suffix) shows if they are static (@file{.a}) or dynamic 
(@file{.so}), as described below.
+The name of the library is in the middle of these two, for example 
@file{libgsl.a} or @file{libgnuastro.a} (GSL and Gnuastro's static libraries), 
and @file{libgsl.so.23.0.0} or @file{libgnuastro.so.4.0.0} (GSL and Gnuastro's 
shared library, the numbers may be different).
 
 @itemize
 @item
-A static library is known as an archive file and has the @file{.a}
-suffix. A static library is not an executable file.
+A static library is known as an archive file and has the @file{.a} suffix.
+A static library is not an executable file.
 
 @item
 @cindex Shared library versioning
 @cindex Versioning: Shared library
-A shared library ends with the @file{.so.X.Y.Z} suffix and is
-executable. The three numbers in the suffix, describe the version of the
-shared library. Shared library versions are defined to allow multiple
-versions of a shared library simultaneously on a system and to help detect
-possible updates in the library and programs that depend on it by the
-linker.
-
-It is very important to mention that this version number is different from
-the software version number (see @ref{Version numbering}), so do not
-confuse the two. See the ``Library interface versions'' chapter of GNU
-Libtool for more.
-
-For each shared library, we also have two symbolic links ending with
-@file{.so.X} and @file{.so}. They are automatically set by the installer,
-but you can change them (point them to another version of the library) when
-you have multiple versions of a library on your system.
+A shared library ends with the @file{.so.X.Y.Z} suffix and is executable.
+The three numbers in the suffix describe the version of the shared library.
+Shared library versions are defined to allow multiple versions of a shared library to exist simultaneously on a system, and to help the linker detect possible updates in the library and in the programs that depend on it.
+
+It is very important to mention that this version number is different from the 
software version number (see @ref{Version numbering}), so do not confuse the 
two.
+See the ``Library interface versions'' chapter of GNU Libtool for more.
+
+For each shared library, we also have two symbolic links ending with 
@file{.so.X} and @file{.so}.
+They are automatically set by the installer, but you can change them (point 
them to another version of the library) when you have multiple versions of a 
library on your system.
 
 @end itemize
 
 @cindex GNU Libtool
-Libraries that are built with GNU Libtool (including Gnuastro and its
-dependencies), build both static and dynamic libraries by default and
-install them in @file{prefix/lib/} directory (for more on @file{prefix},
-see @ref{Installation directory}). In this way, programs depending on the
-libraries can link with them however they prefer. See the contents of
-@file{/usr/local/lib} with the command below to see both the static and
-shared libraries available there, along with their executable nature and
-the symbolic links:
+Libraries that are built with GNU Libtool (including Gnuastro and its dependencies) are built as both static and dynamic libraries by default and installed in the @file{prefix/lib/} directory (for more on @file{prefix}, see @ref{Installation directory}).
+In this way, programs depending on the libraries can link with them however 
they prefer.
+Look at the contents of @file{/usr/local/lib} with the command below to see both the static and shared libraries available there, along with their executable nature and the symbolic links:
 
 @example
 $ ls -l /usr/local/lib/
 @end example
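+
+For example, for Gnuastro's own library you might see names like these among the outputs (the version numbers on your system may differ):
+
+@example
+libgnuastro.a
+libgnuastro.so -> libgnuastro.so.4.0.0
+libgnuastro.so.4 -> libgnuastro.so.4.0.0
+libgnuastro.so.4.0.0
+@end example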
 
-To link with a library, the linker needs to know where to find the
-library. @emph{At compilation time}, these locations can be passed to the
-linker with two separate options (see @ref{Summary and example on
-libraries} for an example) as described below. You can see these options
-and their usage in practice while building Gnuastro (after running
-@command{make}):
+To link with a library, the linker needs to know where to find the library.
+@emph{At compilation time}, these locations can be passed to the linker with 
two separate options (see @ref{Summary and example on libraries} for an 
example) as described below.
+You can see these options and their usage in practice while building Gnuastro 
(after running @command{make}):
 
 @table @option
 @item -L DIR
-Will tell the linker to look into @file{DIR} for the libraries. For example
-@file{-L/usr/local/lib}, or @file{-L/home/yourname/.local/lib}. You can
-make multiple calls to this option, so the linker looks into several
-directories at compilation time. Note that the space between @key{L} and
-the directory is optional and commonly ignored (written as @option{-LDIR}).
+Will tell the linker to look into @file{DIR} for the libraries.
+For example @file{-L/usr/local/lib}, or @file{-L/home/yourname/.local/lib}.
+You can make multiple calls to this option, so the linker looks into several 
directories at compilation time.
+Note that the space between @key{L} and the directory is optional and commonly omitted (written as @option{-LDIR}).
 
 @item -lLIBRARY
-Specify the unique library identifier/name (not containing directory or
-shared/dynamic nature) to be linked with the executable. As discussed
-above, library file names have fixed parts which must not be given to this
-option. So @option{-lgsl} will guide the linker to either look for
-@file{libgsl.a} or @file{libgsl.so} (depending on the type of linking it is
-suppose to do). You can link many libraries by repeated calls to this
-option.
-
-@strong{Very important: } The place of this option on the compiler's
-command matters. This is often a source of confusion for beginners, so
-let's assume you have asked the linker to link with library A using this
-option. As soon as the linker confronts this option, it looks into the list
-of the undefined symbols it has found until that point and does a search in
-library A for any of those symbols. If any pending undefined symbol is
-found in library A, it is used. After the search in undefined symbols is
-complete, the contents of library A are completely discarded from the
-linker's memory. Therefore, if a later object file or library uses an
-unlinked symbol in library A, the linker will abort after it has finished
-its search in all the input libraries or object files.
-
-As an example, Gnuastro's @code{gal_fits_img_read} function depends on the
-@code{fits_read_pix} function of CFITSIO (specified with
-@option{-lcfitsio}, which in turn depends on the cURL library, called with
-@option{-lcurl}). So the proper way to link something that uses this
-function is @option{-lgnuastro -lcfitsio -lcurl}. If instead, you give:
-@option{-lcfitsio -lgnuastro} the linker will complain and abort. To avoid
-such linking complexities when using Gnuastro's library, we recommend using
-@ref{BuildProgram}.
+Specify the unique library identifier/name (without its directory or its static/shared suffix) to be linked with the executable.
+As discussed above, library file names have fixed parts which must not be 
given to this option.
+So @option{-lgsl} will guide the linker to look for either @file{libgsl.a} or @file{libgsl.so} (depending on the type of linking it is supposed to do).
+You can link many libraries by repeated calls to this option.
+
+@strong{Very important:} The place of this option on the compiler's command line matters.
+This is often a source of confusion for beginners, so let's assume you have 
asked the linker to link with library A using this option.
+As soon as the linker confronts this option, it looks into the list of the 
undefined symbols it has found until that point and does a search in library A 
for any of those symbols.
+If any pending undefined symbol is found in library A, it is used.
+After the search for undefined symbols is complete, the contents of library A are completely discarded from the linker's memory.
+Therefore, if a later object file or library uses an unlinked symbol in 
library A, the linker will abort after it has finished its search in all the 
input libraries or object files.
+
+As an example, Gnuastro's @code{gal_fits_img_read} function depends on the 
@code{fits_read_pix} function of CFITSIO (specified with @option{-lcfitsio}, 
which in turn depends on the cURL library, called with @option{-lcurl}).
+So the proper way to link something that uses this function is @option{-lgnuastro -lcfitsio -lcurl} (see the sketch just after this table).
+If you instead give @option{-lcfitsio -lgnuastro}, the linker will complain and abort.
+To avoid such linking complexities when using Gnuastro's library, we recommend using @ref{BuildProgram}.
 
 @end table
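+
+As a concrete sketch of the ordering above, a hypothetical program using @code{gal_fits_img_read} might be compiled and linked like this (the @option{-I} and @option{-L} options described above may also be necessary):
+
+@example
+$ gcc myprogram.c -lgnuastro -lcfitsio -lcurl -lm -o myprogram
+@end example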
 
-If you have compiled and linked your program with a dynamic library, then
-the dynamic linker also needs to know the location of the libraries after
-building the program: @emph{every time} the program is run
-afterwards. Therefore, it may happen that you don't get any errors when
-compiling/linking a program, but are unable to run your program because of
-a failure to find a library. This happens because the dynamic linker hasn't
-found the dynamic library @emph{at run time}.
+If you have compiled and linked your program with a dynamic library, then the 
dynamic linker also needs to know the location of the libraries after building 
the program: @emph{every time} the program is run afterwards.
+Therefore, it may happen that you don't get any errors when compiling/linking 
a program, but are unable to run your program because of a failure to find a 
library.
+This happens because the dynamic linker hasn't found the dynamic library 
@emph{at run time}.
 
-To find the dynamic libraries at run-time, the linker looks into the paths,
-or directories, in the @code{LD_LIBRARY_PATH} environment variable. For a
-discussion on environment variables, especially search paths like
-@code{LD_LIBRARY_PATH}, and how you can add new directories to them, see
-@ref{Installation directory}.
+To find the dynamic libraries at run-time, the linker looks into the paths, or 
directories, in the @code{LD_LIBRARY_PATH} environment variable.
+For a discussion on environment variables, especially search paths like 
@code{LD_LIBRARY_PATH}, and how you can add new directories to them, see 
@ref{Installation directory}.
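+
+For example, if your libraries are installed under a hypothetical @file{$HOME/.local/lib}, you could add that directory with a command like this (see @ref{Installation directory} for making such changes permanent):
+
+@example
+$ export LD_LIBRARY_PATH="$LD_LIBRARY_PATH:$HOME/.local/lib"
+@end example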
 
 
 
 @node Summary and example on libraries,  , Linking, Review of library 
fundamentals
 @subsection Summary and example on libraries
 
-After the mostly abstract discussions of @ref{Headers} and @ref{Linking},
-we'll give a small tutorial here. But before that, let's recall the general
-steps of how your source code is prepared, compiled and linked to the
-libraries it depends on so you can run it:
+After the mostly abstract discussions of @ref{Headers} and @ref{Linking}, 
we'll give a small tutorial here.
+But before that, let's recall the general steps of how your source code is 
prepared, compiled and linked to the libraries it depends on so you can run it:
 
 @enumerate
 @item
-The @strong{pre-processor} includes the header (@file{.h}) files into the
-function definition (@file{.c}) files, expands pre-processor macros and
-generally prepares the human-readable source for compilation (reviewed in
-@ref{Headers}).
+The @strong{pre-processor} includes the header (@file{.h}) files into the function definition (@file{.c}) files and expands pre-processor macros.
+In general, the pre-processor prepares the human-readable source for compilation (reviewed in @ref{Headers}).
 
 @item
-The @strong{compiler} will translate (compile) the human-readable contents
-of each source (merged @file{.c} and the @file{.h} files, or generally the
-output of the pre-processor) into the computer-readable code of @file{.o}
-files.
+The @strong{compiler} will translate (compile) the human-readable contents of 
each source (merged @file{.c} and the @file{.h} files, or generally the output 
of the pre-processor) into the computer-readable code of @file{.o} files.
 
 @item
-The @strong{linker} will link the called function definitions from various
-compiled files to create one unified object. When the unified product has a
-@code{main} function, this function is the product's only entry point,
-enabling the operating system or user to directly interact with it, so the
-product is a program. When the product doesn't have a @code{main} function,
-the linker's product is a library and its exported functions can be linked
-to other executables (it has many entry points).
+The @strong{linker} will link the called function definitions from various 
compiled files to create one unified object.
+When the unified product has a @code{main} function, this function is the 
product's only entry point, enabling the operating system or user to directly 
interact with it, so the product is a program.
+When the product doesn't have a @code{main} function, the linker's product is 
a library and its exported functions can be linked to other executables (it has 
many entry points).
 @end enumerate
 
 @cindex GCC
 @cindex GNU Compiler Collection
-The GNU Compiler Collection (or GCC for short) will do all three steps. So
-as a first example, from Gnuastro's source, go to @file{tests/lib/}. This
-directory contains the library tests, you can use these as some simple
-tutorials. For this demonstration, we will compile and run the
-@file{arraymanip.c}. This small program will call Gnuastro library for some
-simple operations on an array (open it and have a look). To compile this
-program, run this command inside the directory containing it.
+The GNU Compiler Collection (or GCC for short) will do all three steps.
+So as a first example, from Gnuastro's source, go to @file{tests/lib/}.
+This directory contains the library tests; you can use them as simple tutorials.
+For this demonstration, we will compile and run @file{arraymanip.c}.
+This small program calls Gnuastro's library for some simple operations on an array (open it and have a look).
+To compile this program, run this command inside the directory containing it.
 
 @example
 $ gcc arraymanip.c -lgnuastro -lm -o arraymanip
 @end example
 
 @noindent
-The two @option{-lgnuastro} and @option{-lm} options (in this order) tell
-GCC to first link with the Gnuastro library and then with C's math
-library. The @option{-o} option is used to specify the name of the output
-executable, without it the output file name will be @file{a.out} (on most
-OSs), independent of your input file name(s).
+The two @option{-lgnuastro} and @option{-lm} options (in this order) tell GCC 
to first link with the Gnuastro library and then with C's math library.
+The @option{-o} option is used to specify the name of the output executable; without it, the output file name will be @file{a.out} (on most OSs), independent of your input file name(s).
 
-If your top Gnuastro installation directory (let's call it @file{$prefix},
-see @ref{Installation directory}) is not recognized by GCC, you will get
-pre-processor errors for unknown header files. Once you fix it, you will
-get linker errors for undefined functions. To fix both, you should run GCC
-as follows: additionally telling it which directories it can find
-Gnuastro's headers and compiled library (see @ref{Headers} and
-@ref{Linking}):
+If your top Gnuastro installation directory (let's call it @file{$prefix}, see 
@ref{Installation directory}) is not recognized by GCC, you will get 
pre-processor errors for unknown header files.
+Once you fix it, you will get linker errors for undefined functions.
+To fix both, run GCC as follows, additionally telling it in which directories it can find Gnuastro's headers and compiled library (see @ref{Headers} and @ref{Linking}):
 
 @example
 $ gcc -I$prefix/include -L$prefix/lib arraymanip.c -lgnuastro -lm     \
@@ -24234,53 +17675,35 @@ $ gcc -I$prefix/include -L$prefix/lib arraymanip.c 
-lgnuastro -lm     \
 @end example
 
 @noindent
-This single command has done all the pre-processor, compilation and linker
-operations. Therefore no intermediate files (object files in particular)
-were created, only a single output executable was created. You are now
-ready to run the program with:
+This single command has done all the pre-processor, compilation and linker 
operations.
+Therefore no intermediate files (object files in particular) were created; there is only the single output executable.
+You are now ready to run the program with:
 
 @example
 $ ./arraymanip
 @end example
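+
+If you would like to see the three steps of the previous section done separately on this same file, you can, for example, run them one by one (a sketch; as before, you may also need the @option{-I} and @option{-L} options):
+
+@example
+$ gcc -E arraymanip.c > arraymanip.i    # Pre-process only.
+$ gcc -c arraymanip.c                   # Compile into `arraymanip.o'.
+$ gcc arraymanip.o -lgnuastro -lm -o arraymanip    # Link.
+@end example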
 
-The Gnuastro functions called by this program only needed to be linked with
-the C math library. But if your program needs WCS coordinate
-transformations, needs to read a FITS file, needs special math operations
-(which include its linear algebra operations), or you want it to run on
-multiple CPU threads, you also need to add these libraries in the call to
-GCC: @option{-lgnuastro -lwcs -lcfitsio -lgsl -lgslcblas -pthread -lm}. In
-@ref{Gnuastro library}, where each function is documented, it is mentioned
-which libraries (if any) must also be linked when you call a function. If
-you feel all these linkings can be confusing, please consider Gnuastro's
-@ref{BuildProgram} program.
+The Gnuastro functions called by this program only needed to be linked with 
the C math library.
+But if your program needs WCS coordinate transformations, needs to read a FITS 
file, needs special math operations (which include its linear algebra 
operations), or you want it to run on multiple CPU threads, you also need to 
add these libraries in the call to GCC: @option{-lgnuastro -lwcs -lcfitsio 
-lgsl -lgslcblas -pthread -lm}.
+In @ref{Gnuastro library}, where each function is documented, it is mentioned 
which libraries (if any) must also be linked when you call a function.
+If you feel that all this linking can be confusing, please consider using Gnuastro's @ref{BuildProgram} program.
 
 
 @node BuildProgram, Gnuastro library, Review of library fundamentals, Library
 @section BuildProgram
-The number and order of libraries that are necessary for linking a program
-with Gnuastro library might be too confusing when you need to compile a
-small program for one particular job (with one source file). BuildProgram
-will use the information gathered during configuring Gnuastro and link with
-all the appropriate libraries on your system. This will allow you to easily
-compile, link and run programs that use Gnuastro's library with one simple
-command and not worry about which libraries to link to, or the linking
-order.
+The number and order of libraries that are necessary for linking a program with the Gnuastro library can be confusing when you need to compile a small program for one particular job (with one source file).
+BuildProgram will use the information gathered during configuring Gnuastro and 
link with all the appropriate libraries on your system.
+This allows you to easily compile, link and run programs that use Gnuastro's library with one simple command, without worrying about which libraries to link with or the linking order.
 
 @cindex GNU Libtool
-BuildProgram uses GNU Libtool to find the necessary libraries to link
-against (GNU Libtool is the same program that builds all of Gnuastro's
-libraries and programs when you run @code{make}). So in the future, if
-Gnuastro's prerequisite libraries change or other libraries are added, you
-don't have to worry, you can just run BuildProgram and internal linking
-will be done correctly.
+BuildProgram uses GNU Libtool to find the necessary libraries to link against 
(GNU Libtool is the same program that builds all of Gnuastro's libraries and 
programs when you run @code{make}).
+So in the future, if Gnuastro's prerequisite libraries change or other libraries are added, you don't have to worry; you can just run BuildProgram and the internal linking will be done correctly.
 
 @cartouche
 @noindent
-@strong{BuildProgram requires GNU Libtool:} BuildProgram depends on GNU
-Libtool, other implementations don't have some necessary features. If GNU
-Libtool isn't available at Gnuastro's configure time, you will get a notice
-at the end of the configuration step and BuildProgram will not be built or
-installed. Please see @ref{Optional dependencies} for more information.
+@strong{BuildProgram requires GNU Libtool:} BuildProgram depends on GNU Libtool; other implementations lack some necessary features.
+If GNU Libtool isn't available at Gnuastro's configure time, you will get a 
notice at the end of the configuration step and BuildProgram will not be built 
or installed.
+Please see @ref{Optional dependencies} for more information.
 @end cartouche
 
 @menu
@@ -24290,10 +17713,8 @@ installed. Please see @ref{Optional dependencies} for 
more information.
 @node Invoking astbuildprog,  , BuildProgram, BuildProgram
 @subsection Invoking BuildProgram
 
-BuildProgram will compile and link a C source program with Gnuastro's
-library and all its dependencies, greatly facilitating the compilation and
-running of small programs that use Gnuastro's library. The executable name
-is @file{astbuildprog} with the following general template:
+BuildProgram will compile and link a C source program with Gnuastro's library 
and all its dependencies, greatly facilitating the compilation and running of 
small programs that use Gnuastro's library.
+The executable name is @file{astbuildprog} with the following general template:
 
 @example
 $ astbuildprog [OPTION...] C_SOURCE_FILE
@@ -24321,30 +17742,21 @@ $ astbuildprog -Lother -Iother/dir myprogram.c
 $ astbuildprog --onlybuild myprogram.c
 @end example
 
-If BuildProgram is to run, it needs a C programming language source file as
-input. By default it will compile and link the program to build the a final
-executable file and run it. The built executable name can be set with the
-optional @option{--output} option. When no output name is set, BuildProgram
-will use Gnuastro's @ref{Automatic output}, and remove the suffix of the
-input and use that as the output name. For the full list of options that
-BuildProgram shares with other Gnuastro programs, see @ref{Common
-options}. You may also use Gnuastro's @ref{Configuration files} to specify
-other libraries/headers to use for special directories and not have to type
-them in every time.
-
-The first argument is considered to be the C source file that must be
-compiled and linked. Any other arguments (non-option tokens on the
-command-line) will be passed onto the program when BuildProgram wants to
-run it. Recall that by default BuildProgram will run the program after
-building it. This behavior can be disabled with the @code{--onlybuild}
-option.
+If BuildProgram is to run, it needs a C programming language source file as 
input.
+By default it will compile and link the program to build a final executable file and run it.
+The built executable name can be set with the optional @option{--output} 
option.
+When no output name is set, BuildProgram will use Gnuastro's @ref{Automatic output}: it will remove the suffix of the input and use that as the output name.
+For the full list of options that BuildProgram shares with other Gnuastro 
programs, see @ref{Common options}.
+You may also use Gnuastro's @ref{Configuration files} to specify other libraries/headers in special directories, so you don't have to type them in every time.
+
+The first argument is considered to be the C source file that must be compiled 
and linked.
+Any other arguments (non-option tokens on the command-line) will be passed on to the program when BuildProgram runs it.
+Recall that by default BuildProgram will run the program after building it.
+This behavior can be disabled with the @code{--onlybuild} option.
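+
+For example, in the hypothetical command below, @file{image.fits} is not used in the build; it is passed on to the compiled program when BuildProgram runs it:
+
+@example
+$ astbuildprog myprogram.c image.fits
+@end example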
 
 @cindex GNU Make
-When the @option{--quiet} option (see @ref{Operating mode options}) is not
-called, BuildPrograms will print the compilation and running commands. Once
-your program grows and you break it up into multiple files (which are much
-more easily managed with Make), you can use the linking flags of the
-non-quiet output in your @code{Makefile}.
+When the @option{--quiet} option (see @ref{Operating mode options}) is not called, BuildProgram will print the compilation and running commands.
+Once your program grows and you break it up into multiple files (which are 
much more easily managed with Make), you can use the linking flags of the 
non-quiet output in your @code{Makefile}.
 
 @table @option
 @item -I STR
@@ -24352,112 +17764,86 @@ non-quiet output in your @code{Makefile}.
 @cindex GNU CPP
 @cindex C Pre-Processor
 Directory to search for files that you @code{#include} in your C program.
-Note that headers relating to Gnuastro and its dependencies don't need this
-option. This is only necessary if you want to use other headers. It may be
-called multiple times and order matters. This directory will be searched
-before those of Gnuastro's build and also the system search
-directories. See @ref{Headers} for a thorough introduction.
-
-From the GNU C Pre-Processor manual: ``Add the directory @code{STR} to the
-list of directories to be searched for header files. Directories named by
-@option{-I} are searched before the standard system include directories.
-If the directory @code{STR} is a standard system include directory, the
-option is ignored to ensure that the default search order for system
-directories and the special treatment of system headers are not defeated''.
+Note that headers relating to Gnuastro and its dependencies don't need this 
option.
+This is only necessary if you want to use other headers.
+It may be called multiple times and order matters.
+This directory will be searched before those of Gnuastro's build and also the 
system search directories.
+See @ref{Headers} for a thorough introduction.
+
+From the GNU C Pre-Processor manual: ``Add the directory @code{STR} to the 
list of directories to be searched for header files.
+Directories named by @option{-I} are searched before the standard system 
include directories.
+If the directory @code{STR} is a standard system include directory, the option 
is ignored to ensure that the default search order for system directories and 
the special treatment of system headers are not defeated''.
 
 @item -L STR
 @itemx --linkdir=STR
 @cindex GNU Libtool
-Directory to search for compiled libraries to link the program with. Note
-that all the directories that Gnuastro was built with will already be used
-by BuildProgram (GNU Libtool). This option is only necessary if your
-libraries are in other directories. Multiple calls to this option are
-possible and order matters. This directory will be searched before those of
-Gnuastro's build and also the system search directories. See @ref{Linking}
-for a thorough introduction.
+Directory to search for compiled libraries to link the program with.
+Note that all the directories that Gnuastro was built with will already be 
used by BuildProgram (GNU Libtool).
+This option is only necessary if your libraries are in other directories.
+Multiple calls to this option are possible and order matters.
+This directory will be searched before those of Gnuastro's build and also the 
system search directories.
+See @ref{Linking} for a thorough introduction.
 
 @item -l STR
 @itemx --linklib=STR
-Library to link with your program. Note that all the libraries that
-Gnuastro was built with will already be linked by BuildProgram (GNU
-Libtool). This option is only necessary if you want to link with other
-directories. Multiple calls to this option are possible and order
-matters. This library will be linked before Gnuastro's library or its
-dependencies. See @ref{Linking} for a thorough introduction.
+Library to link with your program.
+Note that all the libraries that Gnuastro was built with will already be 
linked by BuildProgram (GNU Libtool).
+This option is only necessary if you want to link with other libraries.
+Multiple calls to this option are possible and order matters.
+This library will be linked before Gnuastro's library or its dependencies.
+See @ref{Linking} for a thorough introduction.
 
 @item -O INT/STR
 @itemx --optimize=INT/STR
 @cindex Optimization
 @cindex GNU Compiler Collection
-Compiler optimization level: 0 (for no optimization, good debugging), 1, 2,
-3 (for the highest level of optimizations). From the GNU Compiler
-Collection (GCC) manual: ``Without any optimization option, the compiler's
-goal is to reduce the cost of compilation and to make debugging produce the
-expected results.  Statements are independent: if you stop the program with
-a break point between statements, you can then assign a new value to any
-variable or change the program counter to any other statement in the
-function and get exactly the results you expect from the source
-code. Turning on optimization flags makes the compiler attempt to improve
-the performance and/or code size at the expense of compilation time and
-possibly the ability to debug the program.'' Please see your compiler's
-manual for the full list of acceptable values to this option.
+Compiler optimization level: 0 (for no optimization, good debugging), 1, 2, 3 
(for the highest level of optimizations).
+From the GNU Compiler Collection (GCC) manual: ``Without any optimization 
option, the compiler's goal is to reduce the cost of compilation and to make 
debugging produce the expected results.
+Statements are independent: if you stop the program with a break point between 
statements, you can then assign a new value to any variable or change the 
program counter to any other statement in the function and get exactly the 
results you expect from the source code.
+Turning on optimization flags makes the compiler attempt to improve the 
performance and/or code size at the expense of compilation time and possibly 
the ability to debug the program.''
+Please see your compiler's manual for the full list of acceptable values to 
this option.
 
 @item -g
 @itemx --debug
 @cindex Debug
-Emit extra information in the compiled binary for use by a debugger. When
-calling this option, it is best to explicitly disable optimization with
-@option{-O0}. To combine both options you can run @option{-gO0} (see
-@ref{Options} for how short options can be merged into one).
+Emit extra information in the compiled binary for use by a debugger.
+When calling this option, it is best to explicitly disable optimization with 
@option{-O0}.
+To combine both options you can run @option{-gO0} (see @ref{Options} for how 
short options can be merged into one).
 
 @item -W STR
 @itemx --warning=STR
-Print compiler warnings on command-line during compilation. ``Warnings are
-diagnostic messages that report constructions that are not inherently
-erroneous but that are risky or suggest there may have been an error.''
-(from the GCC manual). It is always recommended to compile your programs
-with warnings enabled.
-
-All compiler warning options that start with @option{W} are usable by this
-option in BuildProgram also, see your compiler's manual for the full
-list. Some of the most common values to this option are: @option{pedantic}
-(Warnings related to standard C) and @option{all} (all issues the compiler
-confronts).
+Print compiler warnings on command-line during compilation.
+``Warnings are diagnostic messages that report constructions that are not 
inherently erroneous but that are risky or suggest there may have been an 
error.'' (from the GCC manual).
+It is always recommended to compile your programs with warnings enabled.
+
+All compiler warning options that start with @option{W} are usable by this 
option in BuildProgram as well; see your compiler's manual for the full list.
+Some of the most common values for this option are: @option{pedantic} 
(warnings related to standard C) and @option{all} (all issues the compiler 
confronts).
 
 @item -t
 @itemx --tag=STR
-The language configuration information. Libtool can build objects and
-libraries in many languages. In many cases, it can identify the language
-automatically, but when it doesn't you can use this option to explicitly
-notify Libtool of the language. The acceptable values are: @code{CC} for C,
-@code{CXX} for C++, @code{GCJ} for Java, @code{F77} for Fortran 77,
-@code{FC} for Fortran, @code{GO} for Go and @code{RC} for Windows
-Resource. Note that the Gnuastro library is not yet fully compatible with
-all these languages.
+The language configuration information.
+Libtool can build objects and libraries in many languages.
+In many cases, it can identify the language automatically, but when it 
doesn't, you can use this option to explicitly notify Libtool of the language.
+The acceptable values are: @code{CC} for C, @code{CXX} for C++, @code{GCJ} for 
Java, @code{F77} for Fortran 77, @code{FC} for Fortran, @code{GO} for Go and 
@code{RC} for Windows Resource.
+Note that the Gnuastro library is not yet fully compatible with all these 
languages.
 
 @item -b
 @itemx --onlybuild
-Only build the program, don't run it. By default, the built program is
-immediately run afterwards.
+Only build the program, don't run it.
+By default, the built program is immediately run afterwards.
 
 @item -d
 @itemx --deletecompiled
-Delete the compiled binary file after running it. This option is only
-relevant when the compiled program is run after being built. In other
-words, it is only relevant when @option{--onlybuild} is not called. It can
-be useful when you are busy testing a program or just want a fast result
-and the actual binary/compiled file is not of later use.
+Delete the compiled binary file after running it.
+This option is only relevant when the compiled program is run after being 
built.
+In other words, it is only relevant when @option{--onlybuild} is not called.
+It can be useful when you are busy testing a program or just want a fast 
result and the actual binary/compiled file is not of later use.
 
 @item -a STR
 @itemx --la=STR
-Use the given @file{.la} file (Libtool control file) instead of the one
-that was produced from Gnuastro's configuration results. The Libtool
-control file keeps all the necessary information for building and linking a
-program with a library built by Libtool. The default
-@file{prefix/lib/libgnuastro.la} keeps all the information necessary to
-build a program using the Gnuastro library gathered during configure time
-(see @ref{Installation directory} for prefix). This option is useful when
-you prefer to use another Libtool control file.
+Use the given @file{.la} file (Libtool control file) instead of the one that 
was produced from Gnuastro's configuration results.
+The Libtool control file keeps all the necessary information for building and 
linking a program with a library built by Libtool.
+The default @file{prefix/lib/libgnuastro.la} keeps all the information 
necessary to build a program using the Gnuastro library gathered during 
configure time (see @ref{Installation directory} for prefix).
+This option is useful when you prefer to use another Libtool control file.
 @end table
 
 
@@ -24467,55 +17853,38 @@ you prefer to use another Libtool control file.
 @node Gnuastro library, Library demo programs, BuildProgram, Library
 @section Gnuastro library
 
-Gnuastro library's programming constructs (function declarations, macros,
-data structures, or global variables) are classified by context into
-multiple header files (see @ref{Headers})@footnote{Within Gnuastro's
-source, all installed @file{.h} files in @file{lib/gnuastro/} are
-accompanied by a @file{.c} file in @file{/lib/}.}. In this section, the
-functions in each header will be discussed under a separate sub-section,
-which includes the name of the header. Assuming a function declaration is
-in @file{headername.h}, you can include its declaration in your source code
-with:
+Gnuastro library's programming constructs (function declarations, macros, data 
structures, or global variables) are classified by context into multiple header 
files (see @ref{Headers})@footnote{Within Gnuastro's source, all installed 
@file{.h} files in @file{lib/gnuastro/} are accompanied by a @file{.c} file in 
@file{/lib/}.}.
+In this section, the functions in each header will be discussed under a 
separate sub-section, which includes the name of the header.
+Assuming a function declaration is in @file{headername.h}, you can include its 
declaration in your source code with:
 
 @example
 # include <gnuastro/headername.h>
 @end example
 
 @noindent
-The names of all constructs in @file{headername.h} are prefixed with
-@code{gal_headername_} (or @code{GAL_HEADERNAME_} for macros). The
-@code{gal_} prefix stands for @emph{G}NU @emph{A}stronomy @emph{L}ibrary.
+The names of all constructs in @file{headername.h} are prefixed with 
@code{gal_headername_} (or @code{GAL_HEADERNAME_} for macros).
+The @code{gal_} prefix stands for @emph{G}NU @emph{A}stronomy @emph{L}ibrary.
 
-Gnuastro library functions are compiled into a single file which can be
-linked on the command-line with the @option{-lgnuastro} option. See
-@ref{Linking} and @ref{Summary and example on libraries} for an
-introduction on linking and some fully working examples of the
-libraries.
+Gnuastro library functions are compiled into a single file which can be linked 
on the command-line with the @option{-lgnuastro} option.
+See @ref{Linking} and @ref{Summary and example on libraries} for an 
introduction on linking and some fully working examples of the libraries.
 
-Gnuastro's library is a high-level library which depends on lower level
-libraries for some operations (see @ref{Dependencies}). Therefore if at
-least one of Gnuastro's functions in your program use functions from the
-dependencies, you will also need to link those dependencies after linking
-with Gnuastro. See @ref{BuildProgram} for a convenient way to deal with the
-dependencies. BuildProgram will take care of the libraries to link with
-your program (which uses the Gnuastro library), and can even run the built
-program afterwards. Therefore it allows you to conveniently focus on your
-exciting science/research when using Gnuastro's libraries.
+Gnuastro's library is a high-level library which depends on lower-level 
libraries for some operations (see @ref{Dependencies}).
+Therefore, if at least one of Gnuastro's functions in your program uses 
functions from the dependencies, you will also need to link those dependencies 
after linking with Gnuastro.
+See @ref{BuildProgram} for a convenient way to deal with the dependencies.
+BuildProgram will take care of the libraries to link with your program (which 
uses the Gnuastro library), and can even run the built program afterwards.
+Therefore it allows you to conveniently focus on your exciting 
science/research when using Gnuastro's libraries.
 
 
 
 @cartouche
 @noindent
-@strong{Libraries are still under heavy development: } Gnuastro was
-initially created to be a collection of command-line programs. However, as
-the programs and their the shared functions grew, internal (not installed)
-libraries were added. Since the 0.2 release, the libraries are
-install-able. Hence the libraries are currently under heavy development and
-will significantly evolve between releases and will become more mature and
-stable in due time. It will stabilize with the removal of this
-notice. Check the @file{NEWS} file for interface changes. If you use the
-Info version of this manual (see @ref{Info}), you don't have to worry: the
-documentation will correspond to your installed version.
+@strong{Libraries are still under heavy development: } Gnuastro was initially 
created to be a collection of command-line programs.
+However, as the programs and their shared functions grew, internal (not 
installed) libraries were added.
+Since the 0.2 release, the libraries are installable.
+Hence the libraries are currently under heavy development and will 
significantly evolve between releases, becoming more mature and stable in due 
time.
+The library interfaces will be considered stable when this notice is removed.
+Check the @file{NEWS} file for interface changes.
+If you use the Info version of this manual (see @ref{Info}), you don't have to 
worry: the documentation will correspond to your installed version.
 @end cartouche
 
 @menu
@@ -24545,26 +17914,20 @@ documentation will correspond to your installed 
version.
 * Convolution functions::       Library functions to do convolution.
 * Interpolation::               Interpolate (over blank values possibly).
 * Git wrappers::                Wrappers for functions in libgit2.
-* Spectral lines library::
+* Spectral lines library::      Functions for operating on spectral lines.
 * Cosmology library::           Cosmological calculations.
 @end menu
 
 @node Configuration information, Multithreaded programming, Gnuastro library, 
Gnuastro library
 @subsection Configuration information (@file{config.h})
 
-The @file{gnuastro/config.h} header contains information about the full
-Gnuastro installation on your system. Gnuastro developers should note that
-this is the only header that is not available within Gnuastro, it is only
-available to a Gnuastro library user @emph{after} installation. Within
-Gnuastro, @file{config.h} (which is included in every Gnuastro @file{.c}
-file, see @ref{Coding conventions}) has more than enough information about
-the overall Gnuastro installation.
+The @file{gnuastro/config.h} header contains information about the full 
Gnuastro installation on your system.
+Gnuastro developers should note that this is the only header that is not 
available within Gnuastro; it is only available to a Gnuastro library user 
@emph{after} installation.
+Within Gnuastro, @file{config.h} (which is included in every Gnuastro 
@file{.c} file, see @ref{Coding conventions}) has more than enough information 
about the overall Gnuastro installation.
 
 @deffn Macro GAL_CONFIG_VERSION
-This macro can be used as a string
-literal@footnote{@url{https://en.wikipedia.org/wiki/String_literal}}
-containing the version of Gnuastro that is being used. See @ref{Version
-numbering} for the version formats. For example:
+This macro can be used as a string 
literal@footnote{@url{https://en.wikipedia.org/wiki/String_literal}} containing 
the version of Gnuastro that is being used.
+See @ref{Version numbering} for the version formats.
+For example:
 
 @example
 printf("Gnuastro version: %s\n", GAL_CONFIG_VERSION);
@@ -24580,38 +17943,31 @@ char *gnuastro_version=GAL_CONFIG_VERSION;
 
 
 @deffn Macro GAL_CONFIG_HAVE_LIBGIT2
-Libgit2 is an optional dependency of Gnuastro (see @ref{Optional
-dependencies}). When it is installed and detected at configure time, this
-macro will have a value of @code{1} (one). Otherwise, it will have a value
-of @code{0} (zero). Gnuastro also comes with some wrappers to make it
-easier to use libgit2 (see @ref{Git wrappers}).
+Libgit2 is an optional dependency of Gnuastro (see @ref{Optional 
dependencies}).
+When it is installed and detected at configure time, this macro will have a 
value of @code{1} (one).
+Otherwise, it will have a value of @code{0} (zero).
+Gnuastro also comes with some wrappers to make it easier to use libgit2 (see 
@ref{Git wrappers}).
 @end deffn
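+
+For example (an illustrative sketch; the same pattern applies to the other 
+@code{GAL_CONFIG_HAVE_*} macros), a program can adapt itself at compile time:
+
+@example
+#include <stdio.h>
+#include <gnuastro/config.h>
+
+int
+main(void)
+@{
+#if GAL_CONFIG_HAVE_LIBGIT2 == 1
+  printf("This Gnuastro build has Libgit2 support.\n");
+#else
+  printf("This Gnuastro build lacks Libgit2 support.\n");
+#endif
+  return 0;
+@}
+@end example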
 
 @deffn Macro GAL_CONFIG_HAVE_FITS_IS_REENTRANT
 @cindex CFITSIO
-This macro will have a value of 1 when the CFITSIO of the host system has
-the @code{fits_is_reentrant} function (available from CFITSIO version
-3.30). This function is used to see if CFITSIO was configured to read a
-FITS file simultaneously on different threads.
+This macro will have a value of 1 when the CFITSIO of the host system has the 
@code{fits_is_reentrant} function (available from CFITSIO version 3.30).
+This function is used to see if CFITSIO was configured to read a FITS file 
simultaneously on different threads.
 @end deffn
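+
+For example, as a small sketch (the wrapper function name below is 
+hypothetical), this macro can guard the run-time re-entrancy check so the 
+code also compiles against older CFITSIO versions:
+
+@example
+#include <fitsio.h>
+#include <gnuastro/config.h>
+
+int
+fits_reading_is_threadsafe(void)
+@{
+#if GAL_CONFIG_HAVE_FITS_IS_REENTRANT == 1
+  return fits_is_reentrant();
+#else
+  return 0;   /* Old CFITSIO: assume it is not re-entrant. */
+#endif
+@}
+@end example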
 
 
 @deffn Macro GAL_CONFIG_HAVE_WCSLIB_VERSION
-WCSLIB is the reference library for world coordinate system transformation
-(see @ref{WCSLIB} and @ref{World Coordinate System}). However, only more
-recent versions of WCSLIB also provide its version number. If the WCSLIB
-that is installed on the system provides its version (through the possibly
-existing @code{wcslib_version} function), this macro will have a value of
-one, otherwise it will have a value of zero.
+WCSLIB is the reference library for world coordinate system transformation 
(see @ref{WCSLIB} and @ref{World Coordinate System}).
+However, only more recent versions of WCSLIB also provide its version number.
+If the WCSLIB that is installed on the system provides its version (through 
the possibly existing @code{wcslib_version} function), this macro will have a 
value of one, otherwise it will have a value of zero.
 @end deffn
 
 @deffn Macro GAL_CONFIG_HAVE_PTHREAD_BARRIER
-The POSIX threads standard define barriers as an optional
-requirement. Therefore, some operating systems choose to not include it. As
-one of the @command{./configure} step checks, Gnuastro we check if your
-system has this POSIX thread barriers. If so, this macro will have a value
-of @code{1}, otherwise it will have a value of @code{0}. see
-@ref{Implementation of pthread_barrier} for more.
+The POSIX threads standard defines barriers as an optional requirement.
+Therefore, some operating systems choose not to include them.
+As one of the @command{./configure} step checks, Gnuastro checks if your 
system has POSIX thread barriers.
+If so, this macro will have a value of @code{1}, otherwise it will have a 
value of @code{0}.
+See @ref{Implementation of pthread_barrier} for more.
 @end deffn
 
 @cindex 32-bit
@@ -24620,10 +17976,9 @@ of @code{1}, otherwise it will have a value of 
@code{0}. see
 @cindex bit-64
 @deffn Macro GAL_CONFIG_SIZEOF_LONG
 @deffnx Macro GAL_CONFIG_SIZEOF_SIZE_T
-The size of (number of bytes in) the system's @code{long} and @code{size_t}
-types. Their values are commonly either 4 or 8 for 32-bit and 64-bit
-systems. You can also get this value with the expression `@code{sizeof
-size_t}' for example without having to include this header.
+The size of (number of bytes in) the system's @code{long} and @code{size_t} 
types.
+Their values are commonly either 4 or 8 for 32-bit and 64-bit systems.
+You can also get this value with an expression like @code{sizeof(size_t)}, 
without having to include this header.
 @end deffn
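+
+For example (a trivial sketch), since these macros are plain integer 
+constants, they can be printed directly:
+
+@example
+printf("long: %d bytes, size_t: %d bytes\n",
+       GAL_CONFIG_SIZEOF_LONG, GAL_CONFIG_SIZEOF_SIZE_T);
+@end example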
 
 @node Multithreaded programming, Library data types, Configuration 
information, Gnuastro library
@@ -25115,7 +18470,7 @@ This function can be used to fill in arrays of numbers 
from strings (in an
 already allocated data structure), or add nodes to a linked list (if the
 type is a list type). For an array, you have to pass the pointer to the
 @code{i}th element where you want the value to be stored, for example
-@code{&(array[i]}).
+@code{&(array[i])}.
 
 If the string was successfully parsed to the requested type, this function
 will return a @code{0} (zero), otherwise it will return @code{1}
@@ -25409,7 +18764,7 @@ on these flags. If @code{input==NULL}, then this 
function will return
 
 
 @deftypefun {gal_data_t *} gal_blank_flag (gal_data_t @code{*input})
-Create a dataset of the the same size as the input, but with an
+Create a dataset of the same size as the input, but with an
 @code{uint8_t} type that has a value of 1 for data that are blank and 0 for
 those that aren't.
 @end deftypefun
@@ -25808,7 +19163,7 @@ simplify the allocation (and later cleaning) of several 
@code{gal_data_t}s
 that are related.
 
 For example, each column in a table is usually represented by one
-@code{gal_data_t} (so it has its own name, data type, units and etc). A
+@code{gal_data_t} (so it has its own name, data type, units, etc). A
 table (with many columns) can be seen as an array of @code{gal_data_t}s
 (when the number of columns is known a-priori). The functions below are
 defined to create a cleared array of data structures and to free them when
@@ -26346,7 +19701,7 @@ in the same order that they are stored. Each integer is 
printed on one
 line. This function is mainly good for checking/debugging your program. For
 program outputs, it's best to make your own implementation with a better,
 more user-friendly format. For example, see the following code snippet. You can
-also modify it to print all values in one line, and etc, depending on the
+also modify it to print all values in one line, etc, depending on the
 context of your program.
 
 @example
@@ -26451,7 +19806,7 @@ the same order that they are stored. Each integer is 
printed on one
 line. This function is mainly good for checking/debugging your program. For
 program outputs, it's best to make your own implementation with a better,
 more user-friendly format. For example, see the following code snippet. You can
-also modify it to print all values in one line, and etc, depending on the
+also modify it to print all values in one line, etc, depending on the
 context of your program.
 
 @example
@@ -26540,7 +19895,7 @@ the same order that they are stored. Each floating 
point number is printed
 on one line. This function is mainly good for checking/debugging your
 program. For program outputs, its best to make your own implementation with
 a better, more user-friendly format. For example, in the following code
-snippet. You can also modify it to print all values in one line, and etc,
+snippet. You can also modify it to print all values in one line, etc,
 depending on the context of your program.
 
 @example
@@ -26632,7 +19987,7 @@ the same order that they are stored. Each floating 
point number is printed
 on one line. This function is mainly good for checking/debugging your
 program. For program outputs, its best to make your own implementation with
 a better, more user-friendly format. For example, in the following code
-snippet. You can also modify it to print all values in one line, and etc,
+snippet. You can also modify it to print all values in one line, etc,
 depending on the context of your program.
 
 @example
@@ -27317,8 +20672,8 @@ enough, the HDU is also necessary).
 
 Both Gnuastro and CFITSIO have special identifiers for each type that they
 accept. Gnuastro's type identifiers are fully described in @ref{Library
-data types} and are usable for all kinds of datasets (images, table columns
-and etc) as part of Gnuastro's @ref{Generic data container}. However,
+data types} and are usable for all kinds of datasets (images, table columns,
+etc) as part of Gnuastro's @ref{Generic data container}. However,
 following the FITS standard, CFITSIO has different identifiers for images
 and tables. Following CFITSIO's own convention, we will use @code{bitpix}
 for image type identifiers and @code{datatype} for its internal identifiers
@@ -27538,7 +20893,7 @@ This is a very useful function for operations on the 
FITS date values, for
 example sorting FITS files by their dates, or finding the time difference
 between two FITS files. The advantage of working with the Unix epoch time
 is that you don't have to worry about calendar details (for example the
-number of days in different months, or leap years, and etc).
+number of days in different months, or leap years, etc).
 @end deftypefun
 
 @deftypefun void gal_fits_key_read_from_ptr (fitsfile @code{*fptr}, gal_data_t 
@code{*keysll}, int @code{readcomment}, int @code{readunit})
@@ -28276,7 +21631,7 @@ By default, when the dataset only has two values, this 
function will use
 the PostScript optimization that allows setting the pixel values per bit,
 not byte (@ref{Recognized file formats}). This can greatly help reduce the
 file size. However, when @option{dontoptimize!=0}, this optimization is
-disabeled: even though there are only two values (is binary), the
+disabled: even though there are only two values (is binary), the
 difference between them does not correspond to the full contrast of black
 and white.
 @end deftypefun
@@ -28331,7 +21686,7 @@ By default, when the dataset only has two values, this 
function will use
 the PostScript optimization that allows setting the pixel values per bit,
 not byte (@ref{Recognized file formats}). This can greatly help reduce the
 file size. However, when @option{dontoptimize!=0}, this optimization is
-disabeled: even though there are only two values (is binary), the
+disabled: even though there are only two values (is binary), the
 difference between them does not correspond to the full contrast of black
 and white.
 @end deftypefun
@@ -28444,7 +21799,7 @@ points is calculated with the equation below.
 
 @dispmath {\cos(d)=\sin(d_1)\sin(d_2)+\cos(d_1)\cos(d_2)\cos(r_1-r_2)}
 
-However, since the the pixel scales are usually very small numbers, this
+However, since the pixel scales are usually very small numbers, this
 function won't use that direct formula. It will use the
 @url{https://en.wikipedia.org/wiki/Haversine_formula, Haversine formula}
 which is better considering floating point errors:
@@ -28471,7 +21826,7 @@ list of image coordinates given the input WCS 
structure. @code{coords} must
 be a linked list of data structures of float64 (`double') type,
 see @ref{Linked lists} and @ref{List of gal_data_t}. The top (first
 popped/read) node of the linked list must be the first WCS coordinate (RA
-in an image usually) and etc. Similarly, the top node of the output will be
+in an image usually) etc. Similarly, the top node of the output will be
 the first image coordinate (in the FITS standard).
 
 If @code{inplace} is zero, then the output will be a newly allocated list
@@ -29718,7 +23073,7 @@ indexs. The sorting will be ordered according to the 
@code{values} pointer
 of @code{gal_qsort_index_multi}. Note that @code{values} must point to the
 same place in all the structures of the @code{gal_qsort_index_multi} array.
 
-This function is only useful when the the indexs of multiple arrays on
+This function is only useful when the indexs of multiple arrays on
 multiple threads are to be sorted. If your program is single threaded, or
 all the indexs belong to a single array (sorting different sub-sets of
 indexs in a single array on multiple threads), it is recommended to use
@@ -30660,7 +24015,7 @@ vice-versa. See the description of 
@code{gal_label_watershed} for more on
 @code{indexs}.
 
 Each ``clump'' (identified by a positive integer) is assumed to be
-surrounded by atleast one river/watershed pixel (with a non-positive
+surrounded by at least one river/watershed pixel (with a non-positive
 label). This function will parse the pixels identified in @code{indexs} and
 make a measurement on each clump and over all the river/watershed
 pixels. The number of clumps (@code{numclumps}) must be given as an input
@@ -30680,11 +24035,11 @@ tile/value will be associated to each clump based on 
its flux-weighted
 (only positive values) center.
 
 The main output is an internally allocated, 1-dimensional array with one
-value per label. The array information (length, type and etc) will be
+value per label. The array information (length, type, etc) will be
 written into the @code{sig} generic data container. Therefore
 @code{sig->array} must be @code{NULL} when this function is called. After
-this function, the details of the array (number of elements, type and size
-and etc) will be written in to the various components of @code{sig}, see
+this function, the details of the array (number of elements, type and size,
+etc) will be written into the various components of @code{sig}; see
 the definition of @code{gal_data_t} in @ref{Generic data
 container}. Therefore @code{sig} must already be allocated before calling
 this function.
@@ -33104,7 +26459,7 @@ the development of Gnuastro, so please adhere to the 
following guidelines.
 The body should be very descriptive. Start the commit message body by
 explaining what changes your commit makes from a user's perspective (added,
 changed, or removed options, or arguments to programs or libraries, or
-modified algorithms, or new installation step, or etc).
+modified algorithms, or new installation step, etc).
 
 @item
 @cindex Mailing list: gnuastro-commits
@@ -33167,7 +26522,7 @@ Workflow:
 @item
 You can send commit patches by email as fully explained in `Pro Git'. This
 is good for your first few contributions. Just note that raw patches
-(containing only the diff) do not have any meta-data (author name, date and
+(containing only the diff) do not have any meta-data (author name, date,
 etc). Therefore they will not allow us to fully acknowledge your contributions
 as an author in Gnuastro: in the @file{AUTHORS} file and at the start of
 the PDF book. These author lists are created automatically from the version
@@ -33182,7 +26537,7 @@ try the next solution.
 
 @item
 You can have your own forked copy of Gnuastro on any hosting site you like
-(GitHub, GitLab, BitBucket, or etc) and inform us when your changes are
+(GitHub, GitLab, BitBucket, etc) and inform us when your changes are
 ready so we merge them in Gnuastro. This is more suited for people who
 commonly contribute to the code (see @ref{Forking tutorial}).
 
diff --git a/doc/release-checklist.txt b/doc/release-checklist.txt
index 5de5ad0..b176738 100644
--- a/doc/release-checklist.txt
+++ b/doc/release-checklist.txt
@@ -16,6 +16,14 @@ all the commits needed for this release have been completed.
              > ~/gnuastro_book_new_parts.txt
 
 
+ - [STABLE] Check if THANKS and the book's Acknowledgments section have
+   everyone in `doc/announce-acknowledge.txt' in them. To see who has been
+   added in the `THANKS' file since the last stable release (to add in the
+   book), you can use this command:
+
+       $ git diff gnuastro_vP.P..HEAD THANKS
+
+
  - Build the Debian distribution (just for a test) and correct any build or
    Lintian warnings. This is recommended, even if you don't actually want
    to make a release before the alpha or main release. Because the warnings
@@ -44,14 +52,6 @@ all the commits needed for this release have been completed.
  - Check if README includes all the recent updates and important features.
 
 
- - [STABLE] Check if THANKS, and the book's Acknowledgments section have
-   everyone in `doc/announce-acknowledge.txt' in them. To see who has been
-   added in the `THANKS' file since the last stable release (to add in the
-   book), you can use this command:
-
-     $ git diff gnuastro_vP.P..HEAD THANKS
-
-
  - Run the following commands to keep the list of people who contributed
    code and those that must be acknowledged for the announcement (`P.P' is
    the previous version).
diff --git a/lib/label.c b/lib/label.c
index 754555e..895811d 100644
--- a/lib/label.c
+++ b/lib/label.c
@@ -914,10 +914,11 @@ gal_label_grow_indexs(gal_data_t *labels, gal_data_t 
*indexs, int withrivers,
   /* The basic idea is this: after growing, not all the blank pixels are
      necessarily filled, for example the pixels might belong to two regions
      above the growth threshold. So the pixels in between them (which are
-     below the threshold will not ever be able to get a label). Therefore,
-     the safest way we can terminate the loop of growing the objects is to
-     stop it when the number of pixels left to fill in this round
-     (thisround) equals the number of blanks.
+     below the threshold) will not ever be able to get a label, even if they
+     are in the indexs list. Therefore, the safest way we can terminate
+     the loop of growing the objects is to stop it when the number of
+     pixels left to fill in this round (thisround) equals the number of
+     blanks.
 
      To start the loop, we set `thisround' to one more than the number of
      indexed pixels. Note that it will be corrected immediately after the
diff --git a/lib/options.c b/lib/options.c
index 5d9c39a..d29b32f 100644
--- a/lib/options.c
+++ b/lib/options.c
@@ -582,7 +582,7 @@ gal_options_parse_list_of_numbers(char *string, char 
*filename, size_t lineno)
   size_t minmapsize=-1;
 
   /* If we have an empty string, just return NULL. */
-  if(*string=='\0') return NULL;
+  if(string==NULL || *string=='\0') return NULL;
 
   /* Go through the input character by character. */
   while(string && *c!='\0')
@@ -1139,24 +1139,34 @@ gal_options_parse_name_and_values(struct argp_option 
*option, char *arg,
 
       /* Read the values and write the name. */
       dataset=gal_options_parse_list_of_numbers(values, filename, lineno);
-      dataset->name=name;
 
-      /* Add the given dataset to the end of an existing dataset. */
-      existing = *(gal_data_t **)(option->value);
-      if(existing)
+      /* If there actually was a string of numbers, then do the rest. */
+      if(dataset)
         {
-          for(tmp=existing;tmp!=NULL;tmp=tmp->next)
-            if(tmp->next==NULL) { tmp->next=dataset; break; }
+          dataset->name=name;
+
+          /* Add the given dataset to the end of an existing dataset. */
+          existing = *(gal_data_t **)(option->value);
+          if(existing)
+            {
+              for(tmp=existing;tmp!=NULL;tmp=tmp->next)
+                if(tmp->next==NULL) { tmp->next=dataset; break; }
+            }
+          else
+            *(gal_data_t **)(option->value) = dataset;
+
+          /* For a check.
+             printf("arg: %s\n", arg);
+             darray=dataset->array;
+             for(i=0;i<dataset->size;++i) printf("%f\n", darray[i]);
+             exit(0);
+          */
         }
       else
-        *(gal_data_t **)(option->value) = dataset;
-
-      /* For a check.
-      printf("arg: %s\n", arg);
-      darray=dataset->array;
-      for(i=0;i<dataset->size;++i) printf("%f\n", darray[i]);
-      exit(0);
-      */
+        error(EXIT_FAILURE, 0, "`--%s' requires a string of numbers "
+              "(separated by `,' or `:') following its first argument, "
+              "please run with `--help' for more information",
+              option->name);
 
       /* Our job is done, return NULL. */
       return NULL;
diff --git a/lib/statistics.c b/lib/statistics.c
index 6a400e3..c942126 100644
--- a/lib/statistics.c
+++ b/lib/statistics.c
@@ -2049,9 +2049,19 @@ gal_statistics_sigma_clip(gal_data_t *input, float 
multip, float param,
              out of the loop. Normally, `oldstd' should be larger than std,
              because the possible outliers have been removed. If it is not,
              it means that we have clipped too much and must stop anyway,
-             so we don't need an absolute value on the difference! */
-          if( bytolerance && num>0 && ((oldstd - *std) / *std) < param )
-            { gal_data_free(meanstd); gal_data_free(median_d); break; }
+             so we don't need an absolute value on the difference!
+
+             Note that when all the elements are identical after the clip,
+             `std' will be zero. In this case we shouldn't calculate the
+             tolerance (because it will be infinity and thus larger than the
+             requested tolerance level value).*/
+          if( bytolerance && num>0 )
+            if( *std==0 || ((oldstd - *std) / *std) < param )
+              {
+                if(*std==0) {oldmed=*med; oldstd=*std; oldmean=*mean;}
+                gal_data_free(meanstd);   gal_data_free(median_d);
+                break;
+              }
 
           /* Clip all the elements outside of the desired range: since the
              array is sorted, this means to just change the starting


