
From: Kenichi Handa
Subject: Re: unibyte<->multibyte conversion [Re: Emacs-diffs Digest, Vol 2, Issue 28]
Date: Tue, 21 Jan 2003 17:04:37 +0900 (JST)
User-agent: SEMI/1.14.3 (Ushinoya) FLIM/1.14.2 (Yagi-Nishiguchi) APEL/10.2 Emacs/21.2.92 (sparc-sun-solaris2.6) MULE/5.0 (SAKAKI)

In article <address@hidden>, "Stefan Monnier" <monnier+gnu/address@hidden> writes:
>>  unibyte sequence (hex): 81    81    C0    C0
>>                          result of conversion    display in multibyte buffer
>>  string-as-multibyte:    9E A1 81    C0    C0    \201À\300
>>  string-make-multibyte:  9E A1 9E A1 81 C0 81 C0 \201\201ÀÀ
>>  string-to-multibyte:    9E A1 9E A1 C0    C0    \201\201\300\300

> I find the terminology and the concepts confusing.

I agree that those names are not that intuitive, but the
first two already existed before I noticed it.  :-p
But in what sense are the concepts confusing?

> On the other hand, I understand the concept of encoding and decoding.
> The following equivalences almost hold:

>  (string-as-multibyte str) == (decode-coding-string str 'internal)
>  (string-make-multibyte str) == (decode-coding-string str 'default)
>  (string-to-multibyte str) == (decode-coding-string str 'raw-text)

> I said "almost" because:

Please note that decode-coding-string also does eol
conversion.  Using 'internal-unix, 'default-unix, and
'raw-text-unix would make them more equivalent.
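For illustration, the correspondence might be sketched like this
(a sketch only: `emacs-mule-unix' stands in for the missing
`internal', and `iso-latin-1-unix' for `default', which holds only
in a latin-1 language environment):

   ;; STR is a unibyte string.  Each left-hand call is (almost)
   ;; equivalent to the decode-coding-string form in its comment.
   (string-as-multibyte STR)    ; ~ (decode-coding-string STR 'emacs-mule-unix)
   (string-make-multibyte STR)  ; ~ (decode-coding-string STR 'iso-latin-1-unix)
   (string-to-multibyte STR)    ; ~ (decode-coding-string STR 'raw-text-unix)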

> 1 - there is no `internal' coding-system as of now.  In Emacs-21 we'd
>     use `emacs-mule' but for Emacs-22 it would be `utf-8-emacs'.
>     I'm still not sure what such a thing is useful for, tho (see
>     my other email).

Before we introduced eight-bit-XXXX,
  (insert (string-as-multibyte UNIBYTE-STRING))
was the only way to preserve the original byte sequence in a
multibyte buffer.

But, as we now have eight-bit-XXXX, I agree that
string-as-multibyte is not that useful; string-to-multibyte
is better.
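For example (a sketch, assuming string-to-multibyte behaves as in
the table above), string-to-multibyte lets raw bytes survive a
round trip through a multibyte buffer:

   ;; Each raw byte of BYTES becomes an eight-bit character in the
   ;; multibyte buffer; encoding with raw-text-unix recovers the
   ;; original byte sequence.
   (let ((bytes "\201\300"))              ; a unibyte string of raw bytes
     (with-temp-buffer
       (insert (string-to-multibyte bytes))
       (encode-coding-string (buffer-string) 'raw-text-unix)))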

> 2 - there is no `default' coding-system either.  Or maybe
>     locale-coding-system is this default: if your locale is
>     latin-1 then that's latin-1.

If one does not call set-language-environment,
locale-coding-system can be used as `default'.

>     For non-8-bit locales, I don't know what
>     string-make-multibyte does.

In that case, it does latin-1 decoding, which is indeed not that good.
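For instance (a sketch; the exact result depends on the language
environment):

   ;; In a non-latin-1 environment, string-make-multibyte may still
   ;; decode bytes above 127 as latin-1, which is rarely intended:
   (string-make-multibyte "\300")   ; may yield the latin-1 character A-grave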

> 3 - when called with a `raw-text' coding-system, decode-coding-string
>     returns a unibyte string, which is obviously not what we want here.
>     It might make sense for internal operations to return unibyte
>     strings for the `raw-text' case, but I was really surprised that
>     decode-coding-string would ever return a unibyte string.

I tend to agree that it would be better for decode-coding-string
to always return a multibyte string now.

> I think avoiding string-FOO-multibyte and using decode-coding-string
> instead would make things a lot more clear.

I think string-FOO-multibyte (and also string-FOO-unibyte)
are conceptually different from decoding (and encoding)
operations.  It's difficult for me to explain clearly,
but I'll try.

Decoding and encoding are the interface between Emacs and the
outside world.

Decoding converts an external byte sequence (i.e. one belonging
to the world outside Emacs) into Emacs' internal representation.

Encoding converts Emacs' internal representation into a byte
sequence to be used outside Emacs.

But string-FOO-multi/unibyte are conversions within Emacs'
internal representation.
And if one wants to insert the result of encode-coding-string
into a multibyte buffer (perhaps for some post-processing),
what should one do?  If we have string-to-multibyte, we can
do this:
   (insert (string-to-multibyte
             (encode-coding-string MULTIBYTE-STRING CODING)))
If we don't have it, and provided that decode-coding-string
always returns a multibyte string, we must do:
   (insert (decode-coding-string
             (encode-coding-string MULTIBYTE-STRING CODING) 'raw-text-unix))
Isn't it very funny?

By the way, I think the culprit of the current problem is
this doctrine of Emacs:
    Do unibyte<->multibyte conversion with "MAKE" by default.

Although this doctrine surely works for handling the unibyte
and multibyte representations transparently, it makes Elisp
programmers very, very confused.  And it is useful only for
people whose main charset is single-byte.

I am seriously considering changing it in emacs-unicode.

Ken'ichi HANDA
