From: Simon Josefsson
Subject: Re: unicode characters > FFFF
Date: Mon, 09 Jan 2006 16:28:21 +0100
User-agent: Gnus/5.110004 (No Gnus v0.4) Emacs/22.0.50 (gnu/linux)
Alexander Gnauck <address@hidden> writes:
> Hello,
Hi Alexander!
> I'm currently working on tools to autogenerate the tables in C# for
> the C# port of libIDN.
Thanks!
> The C# port currently has the same limitations as the Java code. Simon
> asked me if this is something that could be fixed, but I don't think
> so. The C# char type is 16 bits wide, with the range U+0000 - U+FFFF.
> See:
> http://msdn.microsoft.com/library/default.asp?url=/library/en-us/csref/html/vclrfChar_PG.asp
I see. This is the same problem that earlier versions of Java had.
Perhaps they will fix it in C# too.
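To illustrate the limitation (just a sketch, not code from either port): a
code point above U+FFFF has to be split into a UTF-16 surrogate pair before
it fits into 16-bit units, roughly like this in C:

#include <stdint.h>
#include <stdio.h>

/* Split a supplementary code point (> U+FFFF) into a UTF-16 surrogate
   pair.  This is why a single 16-bit char cannot hold it directly.  */
static void
to_surrogates (uint32_t cp, uint16_t *high, uint16_t *low)
{
  cp -= 0x10000;
  *high = 0xD800 + (cp >> 10);
  *low = 0xDC00 + (cp & 0x3FF);
}

int
main (void)
{
  uint16_t hi, lo;
  to_surrogates (0x10205, &hi, &lo);    /* the code point from the test case below */
  printf ("U+10205 -> %04X %04X\n", (unsigned) hi, (unsigned) lo);
  return 0;
}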
> Are there any test cases with Unicode characters > U+FFFF?
For example:
IDNA(<U+10205><U+00ed>dn.example) = xn--dn-mja7734x.example
See also RFC 3454, especially table D.2.
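If it helps to check the vector against the C library, a minimal sketch
(assuming libidn's idna_to_ascii_8z and IDNA_ALLOW_UNASSIGNED from idna.h,
linked with -lidn; the flag is there in case U+10205 is unassigned in the
IDNA tables) could look like:

#include <stdio.h>
#include <stdlib.h>
#include <idna.h>

int
main (void)
{
  /* UTF-8 encoding of <U+10205><U+00ED>dn.example */
  const char *in = "\xF0\x90\x88\x85\xC3\xAD" "dn.example";
  char *out = NULL;
  int rc = idna_to_ascii_8z (in, &out, IDNA_ALLOW_UNASSIGNED);
  if (rc != IDNA_SUCCESS)
    {
      fprintf (stderr, "idna_to_ascii_8z: %s\n", idna_strerror (rc));
      return 1;
    }
  printf ("%s\n", out);   /* expected: xn--dn-mja7734x.example */
  free (out);
  return 0;
}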
> How is this handled in the C version?
I use uint32_t as the Unicode code point data type; it can store 32
bits, so code points above U+FFFF fit without surrogates.
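For example, a small sketch (assuming libidn's stringprep_utf8_to_ucs4 from
stringprep.h, where a negative length means a NUL-terminated string) that
decodes the UTF-8 input into uint32_t code points, so U+10205 lands in a
single element:

#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>
#include <stringprep.h>

int
main (void)
{
  /* UTF-8 encoding of <U+10205><U+00ED> */
  const char *in = "\xF0\x90\x88\x85\xC3\xAD";
  size_t len;
  uint32_t *ucs4 = stringprep_utf8_to_ucs4 (in, -1, &len);
  if (!ucs4)
    return 1;
  for (size_t i = 0; i < len; i++)
    printf ("U+%04lX\n", (unsigned long) ucs4[i]);   /* U+10205, U+00ED */
  free (ucs4);
  return 0;
}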
Good luck,
Simon