Re: Using Perl's cc


From: Eli Zaretskii
Subject: Re: Using Perl's cc
Date: Sat, 04 Jul 2015 21:19:51 +0300

> Date: Sat, 4 Jul 2015 18:32:33 +0100
> From: Gavin Smith <address@hidden>
> 
> >    #   include <netinet/in.h>
> >                              ^
> >   compilation terminated.
> >   Makefile:448: recipe for target `XSParagraph_la-XSParagraph.lo' failed
> >   make[4]: *** [XSParagraph_la-XSParagraph.lo] Error 1
> >
> > This is the incompatibility between MSYS and MinGW toolchains: the
> > latter doesn't have netinet/in.h.  I could try hacking CORE/config.h
> > to undefine the corresponding CPP guard, but I think it would be
> > better to provide a way for the user to specify, at configure time,
> > where to look for the "right" Perl installation and for the header
> > files belonging to that installation, in this case the native build of
> > Perl.  That is, if you still want to pursue this and see whether
> > the extension can be successfully built on Windows.
> 
> Yes, if it cannot be determined automatically, a configure option
> would be necessary.

Meanwhile I looked around, and it sounds like all that's needed is
something like --with-perl-dir=/path/to/perl/installation/directory,
and perhaps a small change to how fetch_conf.pl is invoked (it should
no longer run "/usr/bin/env perl", but rather the Perl in the
specified directory's bin/ subdirectory).  Moreover, it sounds like
ActiveState Perl, which is what I have here, already supports MinGW
automagically (it returned the correct file name for the 'cc' config
option, and I cannot see how it could do that without examining my
system's PATH).

An important, subtle issue is that the Perl specified this way should
be used only for figuring out compilation options for the extensions
and for building/testing the extensions themselves, not for the parts
of the build unrelated to the extensions.

> I wrote all of xspara.c myself - it doesn't come from Perl.

Good, so I can hack it ;-)

> It relies on a UTF-8 codeset being in the locale to be able to use the
> C standard library functions to operate on UTF-8 data, like mbrtowc.
> The UTF-8 data is coming from the Perl instance. (Perl strings have
> two possible internal encodings: one is UTF-8, the other is either
> Latin-1 or "native". The second's not reliable, so I forced the UTF-8
> representation.) If we can't do that, then it shouldn't be a big
> problem to write, or copy from elsewhere, code to process UTF-8 data,
> because the encoding isn't that complicated.

Can we use wchar_t instead?  Windows does support that out of the box,
and a few functions that are absent, like wcwidth, can be easily
written or emulated.  What's important is that Windows' wchar_t type
holds UTF-16 encoded Unicode codepoints, so all that's needed is
conversion from and to UTF-8.  On GNU/Linux, wchar_t is a 32-bit data
type that carries the Unicode codepoints themselves, so again only a
two-way conversion is needed.
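
To make this concrete, here is a minimal sketch in C of such a
conversion (utf8_decode and utf8_to_wchar are hypothetical names; real
code would need complete validation and the reverse direction as
well):

#include <stddef.h>
#include <stdint.h>
#include <wchar.h>

/* Decode one UTF-8 sequence into a Unicode codepoint.  Returns the
   number of bytes consumed, or 0 on malformed input.  */
static size_t
utf8_decode (const char *s, uint32_t *cp)
{
  const unsigned char *p = (const unsigned char *) s;

  if (p[0] < 0x80)
    {
      *cp = p[0];
      return 1;
    }
  if ((p[0] & 0xE0) == 0xC0 && (p[1] & 0xC0) == 0x80)
    {
      *cp = ((uint32_t) (p[0] & 0x1F) << 6) | (p[1] & 0x3F);
      return 2;
    }
  if ((p[0] & 0xF0) == 0xE0 && (p[1] & 0xC0) == 0x80
      && (p[2] & 0xC0) == 0x80)
    {
      *cp = ((uint32_t) (p[0] & 0x0F) << 12)
            | ((uint32_t) (p[1] & 0x3F) << 6) | (p[2] & 0x3F);
      return 3;
    }
  if ((p[0] & 0xF8) == 0xF0 && (p[1] & 0xC0) == 0x80
      && (p[2] & 0xC0) == 0x80 && (p[3] & 0xC0) == 0x80)
    {
      *cp = ((uint32_t) (p[0] & 0x07) << 18)
            | ((uint32_t) (p[1] & 0x3F) << 12)
            | ((uint32_t) (p[2] & 0x3F) << 6) | (p[3] & 0x3F);
      return 4;
    }
  return 0;
}

/* Convert a UTF-8 string to wchar_t.  Where wchar_t is 16 bits wide
   (Windows), codepoints above U+FFFF become UTF-16 surrogate pairs;
   where it is 32 bits wide (GNU/Linux), the codepoint is stored
   directly.  Returns the number of wide characters written.  */
static size_t
utf8_to_wchar (const char *s, wchar_t *out, size_t outlen)
{
  size_t n = 0;

  while (*s)
    {
      uint32_t cp;
      size_t len = utf8_decode (s, &cp);

      if (len == 0)
        break;                  /* malformed input */
      s += len;
      if (sizeof (wchar_t) == 2 && cp > 0xFFFF)
        {
          /* Encode as a UTF-16 surrogate pair.  */
          if (n + 2 > outlen)
            break;
          cp -= 0x10000;
          out[n++] = (wchar_t) (0xD800 + (cp >> 10));
          out[n++] = (wchar_t) (0xDC00 + (cp & 0x3FF));
        }
      else
        {
          if (n + 1 > outlen)
            break;
          out[n++] = (wchar_t) cp;
        }
    }
  if (n < outlen)
    out[n] = L'\0';
  return n;
}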

> Another problem would be the use of functions that operate on wide
> characters: iswupper, iswspace, and wcwidth. It'd be too much to
> replicate these completely.

See above: Windows already has most of them.  Just try to avoid using
those that are not defined by ANSI C, as they might be unavailable
(but could be provided if really needed).
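
If wcwidth does turn out to be needed and missing, a rough fallback
could look like the sketch below (fallback_wcwidth is a hypothetical
name, and the ranges are only illustrative, nowhere near a complete
Unicode width table):

#include <wchar.h>
#include <wctype.h>

/* Rough stand-in for wcwidth on systems that lack it (e.g. MinGW).
   A serious replacement would consult the Unicode East Asian Width
   and combining-character tables.  */
static int
fallback_wcwidth (wchar_t wc)
{
  if (wc == 0)
    return 0;
  if (!iswprint (wc))
    return -1;                  /* control or otherwise non-printable */
  if (wc >= 0x0300 && wc <= 0x036F)
    return 0;                   /* combining marks occupy no columns */
  if ((wc >= 0x1100 && wc <= 0x115F)      /* Hangul Jamo */
      || (wc >= 0x2E80 && wc <= 0xA4CF)   /* CJK and neighbors */
      || (wc >= 0xAC00 && wc <= 0xD7A3)   /* Hangul syllables */
      || (wc >= 0xF900 && wc <= 0xFAFF)   /* CJK compatibility */
      || (wc >= 0xFF00 && wc <= 0xFF60))  /* fullwidth forms */
    return 2;
  return 1;
}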


