Re: XeTeX encoding problem

From: Gavin Smith
Subject: Re: XeTeX encoding problem
Date: Fri, 15 Jan 2016 15:32:45 +0000

On 15 January 2016 at 15:19, Masamichi HOSODA <address@hidden> wrote:
>>>     (something like ``Table of Contents'' broken etc.)
>>> That can be fixed in other ways, without resorting to native UTF-8.
>> I agree.
> In the case of LuaTeX, it can indeed be fixed.
> In the case of XeTeX, unfortunately,
> it cannot be fixed, if I understand correctly.

I think it could be done by changing the active definitions of bytes
128-255 when writing to an auxiliary file: each would read a single
Unicode character and write out an ASCII sequence that represents that
character, probably involving the @U command. Do you know how to do
this?
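To make the idea concrete, here is a rough sketch in Python (not the texinfo.tex implementation; the function name `escape_for_aux` is invented for illustration): each non-ASCII character headed for the auxiliary file is replaced by an ASCII `@U{XXXX}` escape, which Texinfo's @U command can later expand back into the character.

```python
def escape_for_aux(text):
    """Replace each non-ASCII character with an ASCII @U{XXXX}
    escape (@U takes a hexadecimal code point), so that the
    auxiliary file contains only ASCII bytes."""
    parts = []
    for ch in text:
        if ord(ch) < 128:
            parts.append(ch)
        else:
            parts.append("@U{%04X}" % ord(ch))
    return "".join(parts)
```

For example, escape_for_aux("Caf\u00e9") yields "Caf@U{00E9}".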

> Yes, CJK fonts are required.
> For example, if you want to use Japanese characters,
> I think that it is possible to set the Japanese font in txi-ja.tex.
> However, if native Unicode support is disabled,
> Japanese characters cannot be used in this way.

Good idea to put the font loading in the translation files.
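As a sketch of what such a translation-file fragment could look like (the macro name \jprm and the font name IPAexMincho are placeholders, not what txi-ja.tex actually contains), a XeTeX-style declaration loading a system CJK font by name might be:

```tex
% Hypothetical fragment for txi-ja.tex: load an installed Japanese
% font using XeTeX's extended \font syntax.  The font name is only
% an example; any available CJK font would serve.
\font\jprm="IPAexMincho" at 11pt
\jprm
```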

>> I don't think we should use my previous idea of only using native
>> UTF-8 support if "@documentencoding UTF-8" is not given. I thought it
>> was a neat idea but I can see that some people would find it
>> confusing.
> In the case of texi2html, "@documentencoding UTF-8" should be given.
> Most HTML browsers recognize "charset=utf-8" in the generated HTML files
> and use something like native Unicode support.
> Therefore Japanese characters can be used.
> In the case of texi2pdf, the same Texinfo files should be usable,
> in my humble opinion.

OK, HTML output is a consideration as well.
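For reference, the mechanism the quoted message relies on is the encoding declaration in the generated pages; a minimal head fragment that browsers recognize as UTF-8 looks like this (a generic example, not the exact markup texi2html emits):

```html
<head>
  <!-- Declares the page encoding; the browser decodes the file as UTF-8. -->
  <meta charset="utf-8">
  <title>Example</title>
</head>
```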
