
Re: XeTeX encoding problem

From: Gavin Smith
Subject: Re: XeTeX encoding problem
Date: Sun, 17 Jan 2016 16:18:16 +0000

On 17 January 2016 at 15:27, Masamichi HOSODA <address@hidden> wrote:

> I have another solution.
> The sample patch is attached to this mail.
> Unicode fonts are not required. (default Computer Modern is used.)
> Byte wise input is *NOT* used.
> Unicode glyphs (U+00FC etc.) can be used.
> How about this?

If I understand correctly, you are changing the category codes of the
Unicode characters when writing out to an auxiliary file, but only for
those Unicode characters that have definitions. This causes each such
character to be written out as a literal UTF-8 byte sequence. For the
regular output, the definitions given with \DeclareUnicodeCharacter
are used instead of trying to fetch a glyph for the Unicode character
from a font. If no definition is given, then the character must be
present in the font.
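A minimal sketch of how I understand the mechanism, in plain TeX
(names and details simplified; this is not the actual texinfo.tex
code, and it assumes plain TeX's active ~ as a stand-in for the
active character):

```tex
% Make U+00FC active and give it a definition for normal typesetting,
% in the spirit of \DeclareUnicodeCharacter.
\catcode"FC=\active         % ü becomes an active character
\begingroup
  \lccode`\~="FC            % ~ stands in for active ü below
  \lowercase{\endgroup
    \def~{\"u}}             % active ü typesets as \"u (Computer Modern)
% When writing to an auxiliary file, switching the character's
% category code to 12 ("other") makes TeX write the character itself,
% i.e. its UTF-8 bytes, rather than its expansion.
```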

I don't know why you did it this way; maybe you could explain? Or if
my explanation above is incorrect, could you correct it?

There is a potential problem with changing the category codes of the
Unicode characters: any tokens that have already been read won't be
affected, depending on the implementation. For example, with

@chapter é,

whether this works depends on whether the argument "é" was read
before or after the category codes changed. It would be less fragile
to keep the characters active, but make them expand to a token with
category code 12 ("other").
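A sketch of the less fragile approach, using the usual \lowercase
trick from plain TeX (again simplified, and assuming plain TeX's
active ~; not a tested patch):

```tex
\catcode"E9=\active         % é is, and stays, an active character
\begingroup
  \lccode`\~="E9            % active placeholder: lowercases to active é
  \lccode`\1="E9            % catcode-12 placeholder: `1` lowercases to
                            %   é with catcode 12, since \lowercase
                            %   preserves category codes
  \lowercase{\endgroup
    \def~{1}}               % active é expands to catcode-12 é
% Tokens already read with the active catcode still have a meaning,
% and when written to a file the expansion is the literal character,
% i.e. its UTF-8 bytes.
```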

Using the character definitions built into texinfo.tex with
\DeclareUnicodeCharacter may give poorer results than using the
glyphs from a proper Unicode font.
