emacs-devel

Re: Summary (Re: A system for localizing documentation strings)


From: Jean-Christophe Helary
Subject: Re: Summary (Re: A system for localizing documentation strings)
Date: Sat, 28 Jul 2007 00:41:54 +0900


On 28 Jul 07, at 00:07, Jan Djärv wrote:

Jean-Christophe Helary wrote:
On 27 Jul 07, at 17:03, Jan Djärv wrote:
gettext has nothing to do with emacs. emacs code, as elisp, is data and must be considered as such by specific elisp functions created for localization.

Why?  AFAIK, gettext works with other interpreted languages also.

This is beside the point.

gettext offers tools to extract the strings, deliver PO(T) files, merge the strings, and compile the PO; the result then has to be built against a l10n library to handle the MO at run time.

There is nothing interactive in the process, and all the extraction/conversion/merging/compilation steps are handled by an external process because most localized applications are not specialized in handling strings in general. Some are, but that is not relevant.

Emacs is a highly specialized device for handling strings. Why bother externalizing the process when it could be handled more cleanly from within emacs, in a few elisp functions, translation included?
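For illustration, the "few elisp functions" mentioned above could look like the following minimal sketch. All names (`my-l10n-table', `my-l10n-register', `my-l10n') are hypothetical, not part of any existing package:

```elisp
;; A translation table keyed on the original string, with a lookup
;; function that falls back to the original when no translation exists.
(defvar my-l10n-table (make-hash-table :test #'equal)
  "Table mapping original strings to their translations.")

(defun my-l10n-register (original translation)
  "Register TRANSLATION as the localized form of ORIGINAL."
  (puthash original translation my-l10n-table))

(defun my-l10n (string)
  "Return the registered translation of STRING, or STRING itself."
  (or (gethash string my-l10n-table) string))

;; Example:
(my-l10n-register "Find file: " "Ouvrir le fichier : ")
(my-l10n "Find file: ")   ; => "Ouvrir le fichier : "
(my-l10n "Save buffer? ") ; => "Save buffer? " (untranslated fallback)
```

The whole mechanism stays inside emacs: the table can be populated interactively by a translator and dumped or loaded as plain elisp data.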

There is absolutely no point in using a non-lisp way to extend emacs (because we are talking about extending emacs here, so that any elisp system can be localized within emacs).

Any?  Are there more than one?

Any extension to emacs is an elisp system. Emacs is an environment that runs that system. One can see such systems as emacs extensions, but developers can (and do) choose elisp and emacs because their code will then be highly portable. It is the fabled "write once, run everywhere", in lisp.

2) a way for translators to add translated strings "the emacs way"

We don't need that. Translators are organized in teams, and everything uses gettext. We do not want a different mechanism here. Many translators translate programs they very rarely, if ever, use. It would be a mistake not to use gettext.

Of course we need that. We are trying to extend emacs so that it provides a localization framework for elisp systems.

No, we are trying to localize Emacs. Or that is what I am talking about.

Sure, but the thread did not begin specifically as a suggestion to localize emacs; that was a side effect.

The purpose of the original mail (it was not clear and I apologize for that) was the following:

1) by providing a localization framework in elisp for elisp (including translation-aiding functions)
2) we open the door to emacs localization and documentation translation from within emacs _and_ in elisp
3) that contributes to the adoption of emacs in native-language communities
4) to the adoption of elisp in the same communities
5) to introducing localizers and translators to the fantastic tool that is emacs
6) to providing a more general translation framework in emacs that can be used for localizing and translating in a number of formats besides elisp code

etc.

I'd say that it would be much more straightforward to use an elisp system to do that than to externalize the tasks to gettext.

Elisp is only part of Emacs.  C code must also be handled.

Sure, I already replied to that. Parts that can't be handled with elisp would _have_ to be handled in gettext. But the C parts in emacs are few in number and do not evolve as fast as the elisp parts.

Besides, translator teams use different tools for different tasks, and there is plenty of software that does not use gettext because translating PO files is a pain in the butt: none of the tools at hand offers modern mechanisms to efficiently leverage translation compendia dynamically.

There are many tools out there, I haven't checked them all. But I know some use translation dictionaries. AFAIK, GNU uses gettext only for localization.

It uses gettext to extract strings and merge back the translations. Localization itself is handled by the PO editors, yet another externalization that emacs could handle if a proper set of translation functions were provided (I already mentioned dynamic string matching against reference files, unlike what gettext does with fuzzies, which is static and removes the original context).

gettext is only the mechanism that puts everything back together. It is robust but not conceived with the translator in mind at all.
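The "dynamic string matching" idea above could be sketched as follows. The names are hypothetical; `string-distance' (built-in Levenshtein distance) only exists in modern Emacs, so at the time of this thread one would have had to write the distance function in elisp as well:

```elisp
;; Sketch: find the compendium entry closest to a new source string,
;; keeping the original source text as context (unlike static fuzzies).
(defun my-l10n-closest-match (string compendium)
  "Return the (SOURCE . TRANSLATION) pair in COMPENDIUM closest to STRING.
COMPENDIUM is an alist of (SOURCE . TRANSLATION) pairs."
  (car (sort (copy-sequence compendium)
             (lambda (a b)
               (< (string-distance string (car a))
                  (string-distance string (car b)))))))

;; Example:
(my-l10n-closest-match
 "Save the current buffer"
 '(("Save the buffer" . "Enregistrer le tampon")
   ("Kill the buffer" . "Supprimer le tampon")))
;; => ("Save the buffer" . "Enregistrer le tampon")
```

Because the compendium is live elisp data, the match can be recomputed against the reference files at translation time rather than frozen at merge time.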

3) a way for users to have the localized strings displayed

Gettext does that already. I guess we have to modify print or some low-level C function in Emacs to use gettext.

And there are ways to do that from within emacs, with elisp. Why externalize things when we have an extensible framework that is yearning to be extended?

Why reinvent the wheel when there is a localization framework in use already?

Because the already existing framework is all-encompassing, while we only need a small subset of its functions. Emacs can identify the translatable strings, it can display them, it can register their translations, it can compile the data and evaluate it dynamically. Why bother using an external process when everything can be done more elegantly internally?
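As one hedged illustration of emacs "identifying the translatable strings" itself: since elisp code is data, a string extractor is a short function walking the forms read from a source file. The function name below is hypothetical:

```elisp
;; Sketch: pull the docstring out of a `defun' form read from source,
;; the elisp counterpart of what xgettext does externally.
(defun my-l10n-extract-docstring (form)
  "Return the docstring of a `defun' FORM, or nil if it has none."
  (when (and (eq (car-safe form) 'defun)
             (stringp (nth 3 form)))
    (nth 3 form)))

;; Example:
(my-l10n-extract-docstring
 '(defun greet () "Say hello." (message "hello")))
;; => "Say hello."
```

Mapping this over `read'-ing a whole file would yield the extraction step without leaving emacs.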

Jean-Christophe Helary




