
Re: XeTeX encoding problem

From: Werner LEMBERG
Subject: Re: XeTeX encoding problem
Date: Sat, 16 Jan 2016 11:20:33 +0100 (CET)

>> Well, they *could* be.  We could choose a font with CJK support and
>> make the definitions in texinfo.tex just as we define existing
>> chars.  In principle it is possible to make definitions for any and
>> all Unicode characters in texinfo.tex.  -k
> I believe there would be complications.  If one character is defined
> as {\arabicfont A} and another as {\arabicfont B}, then with
> {\arabicfont A}{\arabicfont B} I believe there wouldn't be proper
> ligature formation between the two characters.  Also, with TeX each
> font can only have 256 glyphs.  I believe the CJK package for LaTeX
> deals with this problem.
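
To make the quoted objection concrete, here is a rough XeTeX-style
sketch of the per-character approach (the font name and macro names
are assumptions for illustration, not existing texinfo.tex code):

  \font\arabicfont="Amiri" at 10pt       % assumed system font
  \def\unilam{{\arabicfont\char"0644}}   % U+0644 ARABIC LETTER LAM
  \def\unialef{{\arabicfont\char"0627}}  % U+0627 ARABIC LETTER ALEF
  % \unilam\unialef ought to produce the lam-alef ligature, but the
  % braces interrupt ligature scanning (the same reason f{}i
  % suppresses the fi ligature in ordinary TeX), so the shaper never
  % sees the two characters side by side.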

My CJK package heavily relies on LaTeX's font selection scheme, and I
doubt that this will ever be ported to texinfo.  If I had to do it, I
would refuse: it is an obsolete technique, not really adequate for
writing documents whose main script is a CJK one.  And the CJK
package's approach completely fails for other scripts like Arabic,
Devanagari, etc., due to ligatures, as you correctly state.

Instead, I would like to have the ucharclasses style file (for XeTeX;
it is also part of TeX Live, BTW) ported to texinfo.
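
For reference, a minimal XeLaTeX example of what ucharclasses
provides; the font name is a placeholder for whatever CJK font is
installed:

  \documentclass{article}
  \usepackage{fontspec}
  \usepackage{ucharclasses}
  \newfontfamily\cjkfont{Noto Sans CJK SC}  % placeholder font
  % switch to \cjkfont on entering the CJK block, back on leaving it
  \setTransitionsFor{CJKUnifiedIdeographs}{\cjkfont}{\normalfont}
  \begin{document}
  Mixed Latin and 漢字 in one paragraph; the font switches are
  inserted automatically at the block boundaries.
  \end{document}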


It should also be ported to LuaTeX so that Unicode blocks
automatically select their associated fonts.

But this is the future.  Right now, I favor a simple solution, namely
native UTF-8 support using the CM Super fonts, even if some characters
are missing (which ones, BTW?).  I guess this covers 99% of the
current needs.
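
From the user's side, that simple solution is just a plain Texinfo
file with UTF-8 input, e.g. (a sketch; whether every character below
comes out depends on the coverage just discussed):

  \input texinfo
  @documentencoding UTF-8
  @settitle Encoding test

  Grüße from Łódź: café, naïve, Ärger.

  @bye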

