Re: New HFS Patch 12.5 fix a dangerous bug

From: Andrew Clausen
Subject: Re: New HFS Patch 12.5 fix a dangerous bug
Date: Thu, 9 Sep 2004 22:32:35 +1000
User-agent: Mutt/

On Thu, Sep 09, 2004 at 02:54:29AM +0200, K.G. wrote:
> > Does MacOSX provide a defragmenter?  We can tell users that are in a
> > hurry to use it.
> I don't think there is a defragmenter in OS X. I think there is an
> automatic defragmenter in kernel space, but if I remember well it only
> reduces internal fragmentation, so this won't help.

What is internal fragmentation?

> Anyway, I'll add a cache that will let us find the descriptor of any
> particular extent in the FS; that will speed things up (I should have
> done it from the very beginning, but the very beginning was a quick hack
> to let me install Linux in a few days on a PPC :p ).

Sounds good :)

> > > But to be efficient I must find a reliable way to generate fragmentation 
> > > in
> > > the catalog and extent overflow files.
> > 
> > By "reliable", do you mean atomic?  If you explain the problem in
> > detail, maybe someone on the list will have an idea...  (or maybe not!)
> I meant reproducible. Using a shell script that generates files in a fairly
> random way, I have been able to make the "catalog" file and the "extents" file
> (which contains the whole FS) split, and to find a bug that only manifests if
> they're split. But I haven't reproduced the split so far, so it's
> difficult to be 100% sure there isn't any remaining bug in that case.

One approach is to use pseudorandom data with a seed that you can save.
That way, you can reproduce the situation.
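As a sketch of that idea (the sizes, file count, and directory layout here are made up for illustration), a generator seeded with an explicit value will recreate exactly the same sequence of file sizes and contents on every run, so a fragmentation pattern that triggers a bug can be replayed just by recording the seed. This assumes Python 3.9+ for `Random.randbytes`:

```python
import random

def generate_files(target_dir, seed, count=100):
    """Create `count` files of pseudorandom sizes and contents.
    Rerunning with the same seed reproduces the identical set of files,
    and therefore (on a fresh FS) the same allocation pattern."""
    rng = random.Random(seed)            # private, seeded generator
    for i in range(count):
        size = rng.randint(1, 1 << 20)   # up to 1 MiB per file
        with open(f"{target_dir}/file_{i:04d}", "wb") as f:
            f.write(rng.randbytes(size))
```

Log the seed with each test run; when a run produces the interesting split, rerunning `generate_files` with that seed reproduces it.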

> I still have an atomicity problem anyway: the start of data extents that
> are relocated in HFS+ can be stored across a sector boundary within a block...
> Can a contiguous write of about 8 sectors be atomic? If not, this is going
> to be a real issue, because I don't think mirroring the "catalog" file is
> possible all the time (when the "catalog" file has more than 8 fragments,
> their position and size are stored in the "extents" file as well as in the
> volume header :/ ).

Small contiguous writes probably won't have any problems.  (This problem
is called a "torn write".)  I wouldn't worry about torn writes for
writes under 100k.

> > Well, the Parted tests only test the semantics, but they aren't
> > especially thorough.  The e2fsck tests contain file system images from
> > "before" and "after" running e2fsck.  When you run the tests,
> > e2fsck is run on the "before" image, and the result is compared to the
> > "after" image in the test suite.
> File system images would be a good idea. They would mostly solve my
> "how to reproducibly generate fragmentation in special files" problem :)

Could do :)

