bug-texinfo

Re: XeTeX encoding problem


From: Gavin Smith
Subject: Re: XeTeX encoding problem
Date: Sun, 7 Feb 2016 11:19:24 +0000

On 31 January 2016 at 13:25, Masamichi HOSODA <address@hidden> wrote:
>> If the empty lines are really the cause, I agree that it deserves a
>> separate commit since it doesn't seem to be related to the encoding
>> problem.
>
> The issue occurs in native Unicode only.
>
> If native Unicode is enabled,
> \nativeunicodechardefsthru may be used at the page break.
> It has \unicodechardefs (renamed from \utfchardefs in my patch).
> Extra empty lines cause an infinite loop.
>
> If @documentencoding is US-ASCII or ISO-8859-1, it does not occur.
> In this case, \nativeunicodechardefsthru is not used.
> \nonasciistringdefs is used instead.
> It does not have extra empty lines.

I understand: it's due to the use of \normalturnoffactive in the
output routine (in \onepageout).

I have a different suggestion for fixing this issue: execute
\unicodechardefs only once in each run, and make the expansion of each
character use a condition. The value of the condition can be changed
to control what the characters do without redefining all of the
characters.
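
Roughly along these lines (a sketch only; \ifunicodeactive and the
single character shown are made-up names for illustration, not what is
in texinfo.tex):

  \newif\ifunicodeactive  \unicodeactivetrue

  % Make U+00E9 active and define it once; its expansion tests the
  % switch, so flipping the switch changes what the character does
  % without redefining it.
  \catcode"00E9=13
  \def^^^^00e9{%
    \ifunicodeactive
      \'e%
    \else
      \string^^^^00e9%
    \fi}

  % The output routine would then only flip the switch around \shipout
  % instead of re-running hundreds of definitions:
  %   \unicodeactivefalse ... \shipout\box255 ... \unicodeactivetrue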

The same could be done for \nonasciistringdefs. I was already thinking
of making this change when I was looking at a log of macro expansion
and scrolling past the many lines that resulted from the redefinitions
of non-ASCII characters.

The only downside would be the slight overhead of evaluating the
conditional, but I expect that in most cases avoiding the redefinitions
would be more efficient, especially considering that there are hundreds
of them for Unicode and they are run at every page break. My intuition
is that making a new definition in TeX is more expensive than
evaluating an extra conditional for each non-ASCII character.


