
Re: XeTeX encoding problem


From: Masamichi HOSODA
Subject: Re: XeTeX encoding problem
Date: Sat, 16 Jan 2016 00:19:07 +0900 (JST)

>> By switching to native UTF-8, the support in texinfo.tex for characters
>> outside the base font is lost, as far as I can see.  Yes, you get some
>> characters "for free" (the ones in the lmodern*.otf fonts now being
>> loaded instead of the traditional cm*) but you also lose some characters
>> (the ones that aren't in lmodern).
> 
> That's quite a major problem, I think. I didn't realise that so many
> characters would be missing - this negates much of the benefit of
> using native Unicode support. Is there really no font that aims to
> include every single Unicode character?

By "every single Unicode character", do you mean
Basic Latin (U+0020 - U+007E) and
Latin-1 Supplement (U+00A0 - U+00FF)?

It seems that Linux Libertine O has those glyphs.
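
As a quick check (only a sketch; the font name must match a font actually
installed on your system), a plain-XeTeX file can load Linux Libertine O
directly and typeset Latin-1 Supplement characters from it:

```tex
% Minimal plain-XeTeX test; save as UTF-8 and run with xetex.
% "Linux Libertine O" is assumed to be installed as a system font.
\font\libertine="Linux Libertine O" at 10pt
\libertine
Café, naïve, ¡señor!  % U+00E9, U+00EF, U+00A1, U+00F1 come straight from the font
\bye
```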

>>     (something like ``Table of Contents'' broken etc.)
>>
>> That can be fixed in other ways, without resorting to native UTF-8.
> 
> I agree.

With LuaTeX, it can indeed be fixed.
With XeTeX, unfortunately, it cannot,
if I understand correctly.

>>     CJK characters can not be used without native UTF-8 support.
>>
>> They still won't work without loading a font that has them (at the right
>> time, without interfering with other fonts already loaded, etc.).  Not
>> simple.  There are no CJK characters in lmodern, unless I'm totally
>> missing them.

Yes, CJK fonts are required.
For example, to use Japanese characters,
I think the Japanese font can be set in txi-ja.tex.
However, if native Unicode support is disabled,
Japanese characters cannot be used this way.
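
As a hedged sketch of what such a txi-ja.tex override might look like under
native Unicode support (the font name "Noto Serif CJK JP" is an assumption
for illustration, not what texinfo.tex actually ships):

```tex
% Hypothetical txi-ja.tex fragment for XeTeX/LuaTeX:
% switch the roman font to one that contains Japanese glyphs.
\font\tenrm="Noto Serif CJK JP" at 10pt
\tenrm
```

Without native UTF-8 support, the Japanese input never reaches the font,
so a setting like this would have no effect.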

>> Anyway, it's up to Gavin whether to install your patch.  I don't have
>> strong feelings about it.  Just pointing out that there are both gains
>> and losses.
> 
> It would be fine as an option. If it's substandard in its glyph
> support there's always the chance of improvements later.
> 
> That said, if there's a fix for the table of contents issue, maybe the
> desire for native UTF-8 support will go away.

I'm fine with whatever you decide.
However, I would like native Unicode support, even if only as an option.

> I don't think we should use my previous idea of only using native
> UTF-8 support if "@documentencoding UTF-8" is not given. I thought it
> was a neat idea but I can see that some people would find it
> confusing.

In the case of texi2html, "@documentencoding UTF-8" should be given.
Most web browsers recognize "charset=utf-8" in the generated HTML files
and effectively provide native Unicode support,
so Japanese characters can be used.
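
For illustration, a minimal UTF-8 Texinfo source that works this way
(the title and body text are made up):

```tex
\input texinfo
@documentencoding UTF-8
@settitle Example
@node Top
日本語のテキスト
@bye
```

texi2html emits charset=utf-8 in the generated HTML, so browsers decode the
Japanese text correctly.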

In the case of texi2pdf, the same .texi files
should be usable, in my humble opinion.


