From: Bruno Haible
Subject: test for utimes, utimens test, glibc futimes()
Date: Fri, 12 Nov 2010 00:32:14 +0100
User-agent: KMail/1.9.9

Hi Eric, Paul,

I'm seeing a test failure in test-utimens and test-futimens on a Linux/x86
machine, and have a hard time understanding the reason.

The test failure is occurring on a Linux 2.4.21 x86 machine
(kaoru.il.thewrittenword.com) with gcc 3.2.3.

test-utimens.h:101: assertion failed
FAIL: test-utimens
test-futimens.h:108: assertion failed
FAIL: test-futimens

On this machine, autoconf has determined that
  checking whether the utimes function works... yes
and defined HAVE_WORKING_UTIMES to 1 accordingly.

(Side note here: On a machine with a very similar configuration,
same kernel version, same gcc version, but with a significant
difference between NFS server time and NFS client time (about 7 minutes),
the autoconf test in m4/utimes.m4 gave the result
  checking whether the utimes function works... no
because at m4/utimes.m4:63 the values were
  now           = 1289514192
  sbuf.st_atime = 1289513839
  sbuf.st_mtime = 1289513839
therefore now - sbuf.st_atime <= 2 is false and now - sbuf.st_mtime <= 2 is
also false. As a consequence, on this other machine HAVE_WORKING_UTIMES
does not get defined, and the tests pass. It therefore looks to me like
the code in m4/utimes.m4 lines 57..64 is unreliable. End of side note.)
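
To make the side note concrete, here is a minimal standalone sketch (my own
illustration, not the literal configure test; the file name conftest.utimes
is made up) of the kind of freshness check done around m4/utimes.m4:57..64,
and of why an NFS server/client clock skew defeats it:

  #include <stdio.h>
  #include <sys/stat.h>
  #include <sys/time.h>
  #include <time.h>
  #include <unistd.h>

  /* Illustration only.  utimes (file, NULL) presumably stamps the file
     with the NFS *server's* idea of "now", while time (NULL) returns the
     *client's* clock, so a skew of a few minutes makes the <= 2
     comparison fail even though utimes itself works fine.  */
  int
  main (void)
  {
    const char *file = "conftest.utimes";  /* made-up file name */
    struct stat sbuf;
    time_t now;
    FILE *fp = fopen (file, "w");
    if (fp == NULL)
      return 2;
    fclose (fp);
    if (utimes (file, NULL) != 0 || stat (file, &sbuf) != 0)
      return 2;
    now = time (NULL);
    if (now - sbuf.st_atime <= 2 && now - sbuf.st_mtime <= 2)
      printf ("utimes appears to work\n");
    else
      printf ("utimes appears broken (or the clocks just disagree)\n");
    unlink (file);
    return 0;
  }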

Analyzing the test-utimens failure:
When I add a printf statement just before the failing assertion at
test-utimens.h:101, like this:

  /* Set both times.  */
  {
    struct timespec ts[2] = { { Y2K, BILLION / 2 - 1 }, { Y2K, BILLION - 1 } };
    ASSERT (func (BASE "file", ts) == 0);
    ASSERT (stat (BASE "file", &st2) == 0);
    ASSERT (st2.st_atime == Y2K);
    ASSERT (0 <= get_stat_atime_ns (&st2));
    ASSERT (get_stat_atime_ns (&st2) < BILLION / 2);
    fprintf (stderr, "%lu %lu %lu\n", (unsigned long) st2.st_atime,
             (unsigned long) st2.st_mtime, (unsigned long) Y2K);
    ASSERT (st2.st_mtime == Y2K);

Two sets of values get printed:

  946684800 946684800 946684800
  946684800 946684801 946684800

The first line is from
  test_utimens (utimens, true);
The second line is from
  test_utimens (do_fdutimens, true);
and fails. Let's concentrate on this second call. do_fdutimens invokes
fdutimens. Here's where the functions are defined:

  $ nm ./test-utimens | grep utime
  0804aa7c t do_fdutimens
  0804aa64 t do_futimens
  0804b548 T fdutimens
           U futimes@@GLIBC_2.3
  0804b720 T lutimens
  08048eb0 t test_futimens
  0804955c t test_lutimens
  0804a168 t test_utimens
  0804adb0 T utimecmp
  0804b708 T utimens
           U utimes@@GLIBC_2.0

So, it uses fdutimens() from gnulib and futimes() from glibc.

Adding HAVE_BUGGY_NFS_TIME_STAMPS=1 does not help.

Here are the corresponding library calls (ltrace):

  open64("test-utimens.tfile", 1, 027777721520)    = 3
  __errno_location()                               = 0xb75e3060
  fsync(3, 0xbfffa1e0, 320, 0x804b74d, 1)          = 0
  futimes(3, 0xbfffa1e0, 320, 0x804b74d, 1)        = 0
  close(3)                                         = 0
  __xstat64(3, "test-utimens.tfile", 0xbfffa370)   = 0

At the moment the futimes() function gets called, here are its arguments (gdb):

  Breakpoint 1, fdutimens (fd=6, file=0x804f8c9 "test-utimens.tfile", 
      timespec=0x0) at utimens.c:337
  337             if (futimes (fd, t) == 0)
  (gdb) print t[0]
  $2 = {tv_sec = 946684800, tv_usec = 499999}
  (gdb) print t[1]
  $3 = {tv_sec = 946684800, tv_usec = 999999}

So, as you can see, the times being passed to futimes() have already been
truncated (not rounded) to microsecond resolution.
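
That truncation is just the integer division in the timespec-to-timeval
conversion (see the preprocessed fdutimens code quoted at the end of this
mail).  A tiny standalone illustration, written by hand for this mail:

  #include <stdio.h>
  #include <sys/time.h>
  #include <time.h>

  /* Not gnulib code, just a demonstration: the conversion truncates the
     nanosecond part through integer division, which is how 999999999 ns
     ends up as 999999 us in the gdb session above.  */
  int
  main (void)
  {
    struct timespec ts[2] = { { 946684800, 499999999 },
                              { 946684800, 999999999 } };
    struct timeval tv[2];
    int i;
    for (i = 0; i < 2; i++)
      {
        tv[i].tv_sec = ts[i].tv_sec;
        tv[i].tv_usec = ts[i].tv_nsec / 1000;  /* 999999999 / 1000 == 999999 */
        printf ("tv_sec = %ld, tv_usec = %ld\n",
                (long) tv[i].tv_sec, (long) tv[i].tv_usec);
      }
    return 0;
  }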

And the corresponding system calls (strace):

  open("test-utimens.tfile", O_WRONLY|O_LARGEFILE) = 3
  fsync(3)                                = 0
  utime("/proc/self/fd/3", [2000/01/01-00:00:00, 2000/01/01-00:00:01]) = 0
  close(3)                                = 0
  stat64("test-utimens.tfile", {st_mode=S_IFREG|0600, st_size=0, ...}) = 0

As you can see, the futimes() call resulted in a utime() call - and the
mtime has been *rounded* to the nearest second. This comes from the code in
glibc/sysdeps/unix/sysv/linux/futimes.c; the rounding has been present there
since the very first version of that file.
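
For readers who don't have that file at hand, here is a rough reconstruction
(not glibc's actual source; the helper round_to_utimbuf is made up) of the
kind of fallback involved: when only the second-resolution utime() interface
is usable, the microsecond part gets rounded to the nearest second:

  #include <stdio.h>
  #include <sys/time.h>
  #include <utime.h>

  /* Rough sketch, not glibc code.  With the values from the gdb session
     above, the atime stays at 946684800 and the mtime becomes 946684801,
     which is exactly what the strace and the failing assertion show.  */
  static void
  round_to_utimbuf (const struct timeval tv[2], struct utimbuf *buf)
  {
    buf->actime  = tv[0].tv_sec + (tv[0].tv_usec >= 500000);
    buf->modtime = tv[1].tv_sec + (tv[1].tv_usec >= 500000);
  }

  int
  main (void)
  {
    struct timeval tv[2] = { { 946684800, 499999 }, { 946684800, 999999 } };
    struct utimbuf buf;
    round_to_utimbuf (tv, &buf);
    printf ("%ld %ld\n", (long) buf.actime, (long) buf.modtime);
    return 0;
  }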

Questions:
  - Is glibc's futimes() implementation correct? Is futimes() allowed to round
    up by as much as half a second?
  - If it is not, shouldn't gnulib work around it?
  - If it is, is the code that invokes futimes in lib/utimens.c correct?
  - Is the test correct, or should it allow a rounded-up mtime?

Bruno


------------------
For reference, here is the relevant part of the preprocessed code of
lib/utimens.c on this platform:

static int
validate_timespec (struct timespec timespec[2])
{
  int result = 0;
  int utime_omit_count = 0;
  ((void) ((timespec) ? 0 : (__assert_fail ("timespec", "utimens.c", 89, 
__PRETTY_FUNCTION__), 0)));
  if ((timespec[0].tv_nsec != (-1)
       && timespec[0].tv_nsec != (-2)
       && (timespec[0].tv_nsec < 0 || 1000000000 <= timespec[0].tv_nsec))
      || (timespec[1].tv_nsec != (-1)
          && timespec[1].tv_nsec != (-2)
          && (timespec[1].tv_nsec < 0 || 1000000000 <= timespec[1].tv_nsec)))
    {
      (*__errno_location ()) = 22;
      return -1;
    }




  if (timespec[0].tv_nsec == (-1)
      || timespec[0].tv_nsec == (-2))
    {
      timespec[0].tv_sec = 0;
      result = 1;
      if (timespec[0].tv_nsec == (-2))
        utime_omit_count++;
    }
  if (timespec[1].tv_nsec == (-1)
      || timespec[1].tv_nsec == (-2))
    {
      timespec[1].tv_sec = 0;
      result = 1;
      if (timespec[1].tv_nsec == (-2))
        utime_omit_count++;
    }
  return result + (utime_omit_count == 1);
}

int
fdutimens (int fd, char const *file, struct timespec const timespec[2])
{
  struct timespec adjusted_timespec[2];
  struct timespec *ts = timespec ? adjusted_timespec : ((void *)0);
  int adjustment_needed = 0;
  struct stat st;

  if (ts)
    {
      adjusted_timespec[0] = timespec[0];
      adjusted_timespec[1] = timespec[1];
      adjustment_needed = validate_timespec (ts);
    }
  if (adjustment_needed < 0)
    return -1;




  if (!file)
    {
      if (fd < 0)
        {
          (*__errno_location ()) = 9;
          return -1;
        }
      if (dup2 (fd, fd) != fd)
        return -1;
    }
# 208 "utimens.c"
  if (fd < 0)
    sync ();
  else
    fsync (fd);
# 291 "utimens.c"
  if (adjustment_needed || (0 && fd < 0))
    {
      if (adjustment_needed != 3
          && (fd < 0 ? stat (file, &st) : fstat (fd, &st)))
        return -1;
      if (ts && update_timespec (&st, &ts))
        return 0;
    }

  {

    struct timeval timeval[2];
    struct timeval *t;
    if (ts)
      {
        timeval[0].tv_sec = ts[0].tv_sec;
        timeval[0].tv_usec = ts[0].tv_nsec / 1000;
        timeval[1].tv_sec = ts[1].tv_sec;
        timeval[1].tv_usec = ts[1].tv_nsec / 1000;
        t = timeval;
      }
    else
      t = ((void *)0);

    if (fd < 0)
      {



      }
    else
      {
# 337 "utimens.c"
        if (futimes (fd, t) == 0)
          return 0;

      }


    if (!file)
      {




        return -1;
      }


    return utimes (file, t);
# 370 "utimens.c"
  }
}



