
Re: Hurd Projects

From: Kevin Kreamer
Subject: Re: Hurd Projects
Date: Sat, 29 Dec 2001 21:39:47 -0600
User-agent: Mutt/1.2.5i

On Sun, Dec 23, 2001 at 04:05:43AM +0100, Lars Weber said:
> mike burrell <> wrote:
> > e.g. one common use of /tmp (in the world of GNU) is when compiling with gcc
> > without the -pipe option.  envision a very perverse situation, where many
> > users are doing many compilations at once (the system would have to be under
> > *very* high load), and gcc dumps its temporary .S files to /tmp.  wouldn't
> > it be possible that one of those .S files "expires" before the assembler
> > even gets a chance to look at it?  would this violate some sort of Unix
> > standard?

According to "Essential System Administration", AIX ships a skulker script, run
nightly from cron, to clean out /tmp (although it isn't enabled by default).
While that doesn't settle whether such expiry violates a standard, it does show
the idea isn't too far off the wall, standards-wise.
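For reference, that kind of cron-driven sweep boils down to a single find
invocation. This is only a sketch, not AIX's actual skulker; the function name
and the 7-day threshold are illustrative:

```shell
# Hypothetical skulker-style sweep (name and threshold are made up here).
skulk() {   # skulk DIR DAYS -- delete files in DIR not accessed in DAYS days
    # -xdev keeps find on one filesystem, so anything mounted under
    # DIR is not swept by accident.
    find "$1" -xdev -type f -atime "+$2" -delete
}
```

AIX would then run something like `skulk /tmp 7` nightly from root's crontab.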

> You assume here that an unchangeable policy of expirefs would be to always
> expire the oldest file in the cache once a certain total size is reached,
> right?  If so, this is not what I had in mind.  From what I envision
> expirefs would be equally usable for situations where files should only be
> expired based on age (and/or some other factors) and a write-error should
> be returned if the size-limit (implicit or explicit) is reached.
> The functionality of expirefs (as I see it) could thus simply be described
> as "a virtual filesystem capable of automatically deleting files based on
> certain configurable factors."

Besides squid, a good place to look for this sort of thing is INN
(or any other Usenet server software).
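To make Lars's distinction concrete, here is a userspace sketch of that policy:
expire strictly by age, and refuse a write when the size limit is hit rather
than silently evicting the oldest file. This is plain shell, not a Hurd
translator, and every name (expirefs_sweep, expirefs_put, the limits) is
illustrative:

```shell
# Sketch of the proposed expirefs policy (illustrative names throughout).

expirefs_sweep() {   # expirefs_sweep DIR DAYS -- expire purely by age
    find "$1" -type f -mtime "+$2" -delete
}

expirefs_put() {     # expirefs_put DIR LIMIT_KB SRC -- fail, don't evict
    dir=$1 limit=$2 src=$3
    used=$(du -sk "$dir" | cut -f1)
    need=$(du -sk "$src" | cut -f1)
    if [ $((used + need)) -gt "$limit" ]; then
        echo "expirefs: size limit ${limit}k reached" >&2
        return 1     # analogous to the write error the translator would return
    fi
    cp "$src" "$dir/"
}
```

Note that a squid- or INN-style cache would instead evict on overflow; the
point here is that the eviction strategy stays a configurable policy, not a
fixed part of expirefs.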

Kevin Kreamer
FsckIt on
