Understanding filter function calls

From: Karthik Chikmagalur
Subject: Understanding filter function calls
Date: Sun, 23 Jul 2023 22:46:09 -0700

I'm testing some code written for the upcoming Org 9.7 for previewing
LaTeX fragments.  Given a LaTeX environment (or many), the way the code
works is:

1. Gather fragments in the buffer

2. Create a TeX file containing this LaTeX

3. Run latex (TeX -> DVI)

4. Run dvisvgm in the LaTeX process sentinel
  (DVI -> SVG or series of SVGs)

5. Update in-buffer previews as SVGs are generated through the dvisvgm
process' filter function.

We use a filter function for the dvisvgm process to incrementally update
previews as this is much faster on larger runs than waiting for the
process sentinel to run.
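To make the setup concrete, here is a minimal sketch (not the actual Org 9.7 code; `my/update-preview-overlay` and `my/finalize-previews` are hypothetical stand-ins) of how the dvisvgm step is wired up, with a filter that acts on each completed SVG and a sentinel for final cleanup:

```elisp
;; Sketch only: function names below are placeholders, not Org's.
(defun my/dvisvgm-filter (proc string)
  "Accumulate dvisvgm stdout and update previews for finished pages."
  (with-current-buffer (process-buffer proc)
    (goto-char (point-max))
    (insert string)
    ;; dvisvgm reports each page as it is written; act on complete
    ;; lines only, and discard output we have already processed.
    (goto-char (point-min))
    (while (re-search-forward "output written to \\(.*\\.svg\\)$" nil t)
      (my/update-preview-overlay (match-string 1))
      (delete-region (point-min) (point)))))

(make-process
 :name "dvisvgm"
 :buffer (generate-new-buffer " *dvisvgm*")
 :command '("dvisvgm" "--page=1-" "--output=%f-%p" "preview.dvi")
 :filter #'my/dvisvgm-filter
 :sentinel (lambda (proc _event)
             (when (eq (process-status proc) 'exit)
               (my/finalize-previews))))
```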

This is different from how LaTeX preview generation has worked in Org
mode so far.  It's as asynchronous as can be, and somewhat similar to
how preview-latex (part of AUCTeX) works, if you're familiar with that.

The problem is that how long preview generation takes is significantly
different for different TeXLive versions (i.e. different LaTeX/dvisvgm
executables).  For example, LaTeX preview generation in an Org file with
~600 fragments takes:

|              | Preview generation time |
| TeXLive 2022 | 2.65 secs               |
| TeXLive 2023 | 4.03 secs               |

This is with identical code on the Emacs side of things.

This difference is NOT explainable as the newer versions of LaTeX or
dvisvgm taking longer.  When benchmarked individually on the same TeX
file -- and without Emacs in the picture -- latex 2022 and latex 2023
(as I'll call them here) take near identical times, as do dvisvgm 2022
and dvisvgm 2023.

|              | latex run       | dvisvgm run  |
| TeXLive 2022 | 253.9 ± 10.6 ms | 1266 ± 41 ms |
| TeXLive 2023 | 258.9 ± 15.0 ms | 1298 ± 15 ms |

The stdout from latex and dvisvgm, which the sentinel/filter functions
parse, is near identical, and the SVG images are the same sizes.  I've
controlled every variable I could think to control:

- Same Org file.
- Same org-latex-preview customizations/settings.
- Same Emacs buffers open, etc.
- Run `garbage-collect' immediately before benchmarking.
- Same background system processes.

So why is the TeXLive 2023 run so much slower in Emacs?

After profiling with elp and generating a flamegraph, this is the result
(png image): https://abode.karthinks.com/share/olp-timing-chart.png

You can ignore the disproportionately long function calls; these are
related to GC, and the one additional GC phase during the slower
(TeXLive 2023) run cannot explain the discrepancy.  Further, sometimes
there are more GC phases in the TeXLive 2022 run, yet it is still
significantly faster.

The overall synchronous run times of the code in Emacs for TeXLive 2022
and 2023 are similar (0.77 vs 0.84 s). However the dvisvgm filter
function is called quite differently in the two cases.

|              | call count | total time | average time |
| TeXLive 2022 |         25 | 0.77 secs  | 31 ms        |
| TeXLive 2023 |         39 | 0.84 secs  | 22 ms        |

Even though the overall time spent in the filter function is about the
same, the TeXLive 2023 dvisvgm run

- calls the filter function 39 times instead of 25, where
- each run of the filter function processes less stdout,
- each run of the filter function takes less time to run,
- and crucially, these filter function calls are spaced slightly further
  apart in time.

The net result is that the overall preview generation process takes
much longer to complete.

I've tested this multiple times over runs (Org/TeX files) of different
sizes, and the pattern is the same.
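If it helps to reproduce, the call spacing and chunk sizes can be captured by wrapping the filter with a logging shim (a sketch; `my/real-filter` stands in for the actual filter function):

```elisp
(defvar my/filter-log nil
  "List of (TIMESTAMP . CHUNK-SIZE) entries, most recent first.")

(defun my/logging-filter (proc string)
  "Record when the filter runs and how much stdout it received,
then delegate to the real filter."
  (push (cons (float-time) (length string)) my/filter-log)
  (my/real-filter proc string))
```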

My questions are thus:

1. If the latex/dvisvgm executables from TeXLive 2022/2023 take about
the same time, and the stdout (that the filter function sees) is
identical, why is Emacs' filter function call behavior different?

2. Is there anything I can do to obtain timing behavior like that of
TeXLive 2022 (see the flamegraph linked above)?  I would really like to
speed up preview generation.
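For context, these are the Emacs-side knobs I'm aware of that affect how process output is chunked and how often filters run; I don't know whether they explain the difference, but they seem like the obvious things to experiment with:

```elisp
;; Read up to 1 MiB per read instead of the 4 KiB default, so each
;; filter call sees more stdout at once (available since Emacs 27).
(setq read-process-output-max (* 1024 1024))

;; When non-nil (the default), Emacs deliberately delays reading from
;; processes that produce output in small bursts, which spaces filter
;; calls further apart in time.
(setq process-adaptive-read-buffering nil)
```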

