[Axiom-developer] Re: [Gcl-devel] GCL on mingw


From: Vadim V. Zhytnikov
Subject: [Axiom-developer] Re: [Gcl-devel] GCL on mingw
Date: Wed, 10 Dec 2003 20:30:20 +0300
User-agent: Mozilla/5.0 (Windows; U; Windows NT 5.1; ru-RU; rv:1.5) Gecko/20031006

Camm Maguire writes:

Hi Vadim!

"Vadim V. Zhytnikov" <address@hidden> writes:


Hi!

Trying to build a recent GCL 2.6.1 on mingw I encountered


  ^^^^^^^^^^^^^^^

Great!  I was just going to ask you if you had a mingw development
system to work with given your earlier mingw problem report.


Well, I just followed Mike's readme.mingw instructions.
The key point is the additional (si::use-fast-links nil) in
pcl/makefile.  He also apparently recommends --enable-custreloc,
but I was able to build ANSI GCL both with and without this option.
On the other hand, I did it with some GCL 2.6.1 snapshot, not
with the very latest CVS sources - I'll have to try it once again.


some strange problem at the configure stage.
The following test fails:
===========================================================
echo $ac_n "checking sizeof struct contblock""... $ac_c" 1>&6
echo "configure:3238: checking sizeof struct contblock" >&5
if test "$cross_compiling" = yes; then
  echo Cannot find sizeof struct contblock;exit 1
else
  cat > conftest.$ac_ext <<EOF
#line 3243 "configure"
#include "confdefs.h"
#include <stdio.h>
        #define EXTER
        #include "$MP_INCLUDE"
        #include "`pwd`/h/enum.h"
        #include "`pwd`/h/object.h"
        int main(int argc,char **argv,char **envp) {
        FILE *f=fopen("conftest1","w");
        fprintf(f,"%u",sizeof(struct contblock));
        fclose(f);
        return 0;
        }
EOF
===========================================================
The troublemakers are these two lines:
        #include "`pwd`/h/enum.h"
        #include "`pwd`/h/object.h"
For some reason, under mingw
        #include "/home/vadim/gcl/h/enum.h"
signals an error: File not found.
I really don't understand such strange behavior,
since ls /home/vadim/gcl/h/enum.h works fine.
But maybe we can just replace these two lines with
        #include "h/enum.h"
        #include "h/object.h"
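
(Presumably the native mingw gcc knows nothing about MSYS's virtual
/home paths - it would look for \home\vadim\... on the current drive -
while ls runs inside the MSYS shell, which does understand them; the
relative includes would sidestep the issue.)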


I'll look into making this change.  Unfortunately, write access to cvs
at savannah is still down.  Please remind me if I forget by the time
it is restored.

With such a modification I was able to build
ANSI GCL on mingw.  My goal was to test strange


  ^^^^

Really?  You built pcl?  We definitely need the details here if so, as
Mike has experienced problems getting through this stage, and has had
to use precompiled .c source to ship his binary ansi package.

BTW, I do suspect the problem you report and Mike's build problem
stem from the same source.


See above.


memory-related GCL crashes under mingw.
I tried various memory allocation tests -
exactly the same ones I used on Linux (see e.g.
atest.lisp in the attachment).  In general
the results are practically the same on both
platforms, with one important exception.
While on Linux I can use at most 110K pages
(MAXPAGES=128K), on mingw all attempts
to allocate more than ~62000 pages
cause an allocation error.  GCL terminates itself
with the message:
Unrecoverable error: Can't allocate
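
(As a rough check: assuming 4096-byte pages and the mingw heap start of
0x10100000 quoted further down, ~62000 pages puts the break near
0x10100000 + 62000*0x1000 = ~0x1F330000, i.e. a little below 0x20000000.)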



My suspicion is that the heap is growing into some memory area already
in use for something else, e.g. shared libs.  I think Mike is away at
the moment, but I had previously requested the following from him,
which you may now be able to provide for me:

1) the value of the configure-determined define DBEGIN
2) after building a gcl with --enable-debug, run it under gdb, break at
   main, and report the value of 'p sbrk(0)'
3) break at init_lisp, stop at this line:

        if (NULL_OR_ON_C_STACK(&j) == 0
            || NULL_OR_ON_C_STACK(Cnil) != 0
            || (((unsigned long )core_end) !=0
                && NULL_OR_ON_C_STACK(core_end) != 0))
          { /* check person has correct definition of above */
            error("NULL_OR_ON_C_STACK macro invalid");
          }

   and report the values returned by 'p &j', 'p &Cnil_body', and
   'p core_end'.

4) Try to let me know whether the C stack counts up or down.  I.e. break
   in some function with a local variable defined, print the address of
   that variable, and compare it to the address of a local variable
   defined in a surrounding (i.e. parent) function.  A minimal
   standalone sketch follows this list.
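
For instance, something along these lines (a hypothetical standalone
test, not part of GCL; the pointer comparison is technically undefined
behavior, but fine for a quick check) would do:

/* If the address of a local in the callee is below the address of a
   local in the caller, the C stack counts down. */
#include <stdio.h>

static void callee(char *caller_local)
{
  char callee_local;
  printf("caller local %p, callee local %p\n",
         (void *)caller_local, (void *)&callee_local);
  printf("stack counts %s\n",
         &callee_local < caller_local ? "down" : "up");
}

int main(void)
{
  char caller_local;
  callee(&caller_local);
  return 0;
}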

We have the following somewhat less than robust code currently in
place for MINGW (main.c):

#ifdef _WIN32
unsigned int _dbegin = 0x10100000;
unsigned int _stacktop, _stackbottom;
#endif

#ifdef _WIN32
          {
            unsigned int dummy;
            _stackbottom = (unsigned int) &dummy;
            _stacktop    = _stackbottom - 0x10000; // ???
          }
#endif


So from this, sbrk(0) should begin at around 0x10100000, and the stack
should count down from some unknown (to me at least) address region.
Please try to verify this and fill in the holes.  It would be great to
firm this up, particularly the hardcoded stack area limit.
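
One way to firm up the stack bounds, assuming the plain Win32
VirtualQuery() call is usable from mingw (a sketch, not existing GCL
code), would be to query the memory region containing a stack address
instead of subtracting a hardcoded 0x10000:

#include <windows.h>
#include <stdio.h>

int main(void)
{
  MEMORY_BASIC_INFORMATION mbi;
  int probe;                    /* some address on the current stack */

  if (VirtualQuery(&probe, &mbi, sizeof mbi)) {
    /* BaseAddress..BaseAddress+RegionSize is the committed block
       containing &probe; AllocationBase is the low end of the whole
       reserved stack region, i.e. the real limit the stack can count
       down to. */
    printf("committed block: %p .. %p\n",
           mbi.BaseAddress,
           (void *)((char *)mbi.BaseAddress + mbi.RegionSize));
    printf("reserved stack low end: %p\n", mbi.AllocationBase);
  }
  return 0;
}

The same values printed from a mingw gcl image would also show whether
the stack sits above or below the 0x10100000 heap start.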

Then, if Mingw has some analog of ldd or /proc/$pid/maps, please
report their contents/output to me on the image running under gdb.
I.e. 'ldd saved_gcl' and 'cat /proc/(process id of saved_gcl)/maps'.

As far as I know there is no analog of /proc/../maps on mingw.
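
In the absence of /proc/<pid>/maps, one possible rough analog (again a
hypothetical sketch, assuming plain Win32 calls work from mingw) is to
walk the address space with VirtualQuery() and print every region:

#include <windows.h>
#include <stdio.h>

int main(void)
{
  MEMORY_BASIC_INFORMATION mbi;
  char *p = NULL;

  /* Walk the user address space region by region - roughly what
     /proc/<pid>/maps would show. */
  while (VirtualQuery(p, &mbi, sizeof mbi) == sizeof mbi) {
    printf("%p - %p  %s\n",
           mbi.BaseAddress,
           (void *)((char *)mbi.BaseAddress + mbi.RegionSize),
           mbi.State == MEM_FREE    ? "free" :
           mbi.State == MEM_RESERVE ? "reserved" : "committed");
    p = (char *)mbi.BaseAddress + mbi.RegionSize;
    if (p == NULL)              /* wrapped past the top of memory */
      break;
  }
  return 0;
}

Linking something like this into the image and calling it just before
the failing allocation would show what, if anything, is already mapped
above core_end.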


I'm assuming the error message you saw was:

        IF_ALLOCATE_ERR error("Can't allocate.  Good-bye!");


Right.

(There are a few other error messages beginning with 'Can't
allocate').  If so, my guess is that sbrk has hit a large jump.  We
have another somewhat ad hoc piece of code in place for mingw at
present  (mingw.h):

#define IF_ALLOCATE_ERR \
        if (core_end != sbrk(0))\
         {char * e = sbrk(0); \
        if (e - core_end < 0x10000 ) { \
          int i; \
          for (i=page(core_end); i < page(e); i++) { \
            type_map[i] = t_other; \
          } \
          core_end = e; \
        } \
          else  \
        error("Someone allocated my memory!");} \
        if (core_end != (sbrk(PAGESIZE*(n - m))))
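
(Reading the macro: if something other than GCL has moved the break
since the last allocation, and the gap is under 0x10000 bytes, the
skipped pages are marked t_other and core_end is advanced over them;
a larger gap is treated as fatal.)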



The analog for linux (bsd.h):

#define ROUND_UP_SBRK(x)  \
       do {long i; \
             if ((i = ((long)x & (PAGESIZE - 1)))) \
               x=sbrk(PAGESIZE - i); } while(0);

#define FIX_RANDOM_SBRK \
do {char *x=sbrk(0); \
  if (core_end != x) \
   { ROUND_UP_SBRK(x); x=sbrk(0);\
     while (core_end < x) \
       { type_map[page(core_end)]= t_other; \
         core_end = core_end + PAGESIZE;} \
     if (core_end !=x) error("Someone allocated my memory");}} while (0)
#define IF_ALLOCATE_ERR \
        FIX_RANDOM_SBRK; \
        if (core_end != sbrk(PAGESIZE*(n - m)))


has no prescribed limit of 0x10000.  Mike, where does this come from?

As a bonus, examining this code leads me to suspect that we already
have designed-in mechanisms to handle a non-contiguous sbrk, a la
exec-shield, meaning that it is likely that someone has made sure GCL
would work under an exec-shield-like randomized sbrk, barring unexec
problems as earlier discussed.

This doesn't yet address your other post, where there is no "Can't
allocate" error, but gives an important clue, I feel.


Take care,



I'll be able to do the tests you suggest this weekend.
At present I have just recompiled GCL with 256K maxpages,
but nothing changed.


--
     Vadim V. Zhytnikov

      <address@hidden>
     <address@hidden>




