Re: New HFS Patch 12.5 fix a dangerous bug
K . G .
Thu, 9 Sep 2004 02:54:29 +0200
On Thu, 9 Sep 2004 08:58:41 +1000
Andrew Clausen <address@hidden> wrote:
> On Tue, Sep 07, 2004 at 03:34:55PM +0200, K.G. wrote:
> > There _is_ HFS support in Linux (though limited).
> > You have:
> > 1) hfsutils and hfsplusutils for access in userspace
> > 2) The kernel implementation (very buggy in 2.4 but quite good in 2.6)
> > 3) A port of newfs_hfs I've corrected (an endianness issue remained in
> > wrapped HFS+) and quickly hacked for Linux 2.4 support (userspace can't
> > access the end of partitions)
> > ( http://xilun.nerim.net/Projet/Parted/Newfs_hfs/newfs_hfs_4_linux-2.tar.gz )
> So, this is adequate for testing?
The only thing lacking is a complete fs consistency checking tool, but I
can still do very useful checks without it.
hpfsck from hfsplusutils only does some basic checks on the volume header.
I could port the fsck_hfs tool from Apple or write the check function for
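For reference, the kind of basic volume-header check hpfsck does can be
sketched in a few lines. This is my own illustration, not hpfsck's code; the
offsets and signatures (two-byte signature at byte 1024: "BD" for HFS, "H+"
for HFS+) come from Apple's published volume format:

```python
def check_volume_header(image_path):
    """Return 'HFS', 'HFS+', or None after a basic signature check.

    Both the HFS Master Directory Block and the HFS+ volume header
    live 1024 bytes into the volume; the first two bytes are the
    signature: b'BD' for HFS, b'H+' for HFS+.
    """
    with open(image_path, "rb") as f:
        f.seek(1024)
        sig = f.read(2)
    if sig == b"BD":
        return "HFS"
    if sig == b"H+":
        return "HFS+"
    return None
```

A real checker would of course go on to validate the allocation bitmap and
the B-tree files, which is exactly the part that is missing today.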
> > My main concern is now speed: it took about 5 hours to resize 10->5 GB
> > with hundreds of thousands of fragmented files, which is slow :p
> > I know what I have to do to make things go faster, and I'll start to work
> > on that in the next versions.
> Does MacOSX provide a defragmenter? We can tell users that are in a
> hurry to use it.
I don't think there is a defragmenter in OS X. I think there is an automatic
defragmenter in kernel space, but if I remember correctly it only reduces
internal fragmentation, so this won't help. Anyway, I'll add a cache that
will let me find the descriptor of any particular extent in the FS, which
will speed things up (I should have done it from the very beginning, but the
very beginning was a quick hack to let me install Linux in a few days on a
PPC :p ).
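The cache described above amounts to mapping block ranges back to the
descriptor that owns them, so a relocation lookup becomes a binary search
instead of a rescan of the B-trees. A minimal sketch (all names are
illustrative, not taken from the actual patch):

```python
import bisect

class ExtentCache:
    """Map block numbers to the extent descriptor covering them,
    so relocating a block doesn't require rescanning the trees."""

    def __init__(self):
        self._starts = []    # sorted start blocks of cached extents
        self._extents = {}   # start block -> (length, owner, index)

    def add(self, start, length, owner, index):
        # Record one extent: `owner` identifies the file, `index` the
        # position of the descriptor inside that file's extent record.
        bisect.insort(self._starts, start)
        self._extents[start] = (length, owner, index)

    def find(self, block):
        """Return (owner, index) of the descriptor covering `block`,
        or None if no cached extent contains it (O(log n))."""
        i = bisect.bisect_right(self._starts, block) - 1
        if i < 0:
            return None
        start = self._starts[i]
        length, owner, index = self._extents[start]
        if start <= block < start + length:
            return owner, index
        return None
```

Building the cache is one linear pass over the catalog and extents overflow
files; after that every lookup during the copy loop is cheap.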
> > Of course I'll also try to write working automatic regression tests for HFS
> > in the next versions. But I must switch to a 2.6 kernel because of HFS bugs
> > in 2.4, and I still haven't done it for other reasons.
> You might find user-mode-linux helpful.
Well, I have 7 computers at home right now (including a friend's Mac),
so I guess one will natively run 2.6 soon :)
> > But to be efficient I must find a reliable way to generate fragmentation in
> > the catalog and extent overflow files.
> By "reliable", do you mean atomic? If you explain the problem in
> detail, maybe someone on the list will have an idea... (or maybe not!)
I meant reproducible. Using a shell script that generates files in a fairly
random way, I have been able to make the "catalog" file and the "extent" file
(which covers the whole FS) split, and to find a bug that only manifests when
they're split. But I haven't reproduced the split since, so it's
difficult to be 100% sure there isn't any remaining bug in that case.
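One way to make such a generator reproducible is to drive it from a fixed
random seed, so every run produces the same sequence of creations and
deletions and, with luck, the same B-tree splits. A sketch of the idea
(paths, sizes, and probabilities are illustrative, not my actual script):

```python
import os
import random

def fragment(mountpoint, nfiles=1000, seed=42):
    """Create interleaved files and punch random holes, with a fixed
    seed so the same allocation pattern is produced on every run."""
    rng = random.Random(seed)          # fixed seed => reproducible runs
    paths = []
    for i in range(nfiles):
        path = os.path.join(mountpoint, "frag%05d" % i)
        size = rng.randrange(1, 64) * 512   # random small sizes
        with open(path, "wb") as f:
            f.write(b"\xa5" * size)
        paths.append(path)
        # Occasionally delete a random earlier file, leaving holes
        # that force later allocations to be non-contiguous.
        if rng.random() < 0.3:
            victim = paths.pop(rng.randrange(len(paths)))
            os.remove(victim)
    return paths                        # files still on disk at the end
```

Run against an HFS(+) mountpoint, two runs with the same seed should exercise
the same splits; of course the kernel allocator still has a say, so this is a
best effort rather than a guarantee.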
I still have an atomicity problem anyway: the start of data extents that are
relocated in HFS+ can be stored across a sector boundary within a block...
Can a contiguous write of about 8 sectors be atomic? If not, this is going
to be a real issue, because I don't think mirroring the "catalog" file is
possible all the time (when the "catalog" file has more than 8 fragments,
their position and size are stored in the "extents" file as well as in the
volume header :/ ).
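Whether a given on-disk record actually straddles a sector boundary is a
simple alignment computation, which at least tells you which writes are
exposed to the problem. A sketch, assuming the usual 512-byte sectors:

```python
def crosses_sector(offset, length, sector_size=512):
    """Return True if the byte range [offset, offset+length) spans
    more than one sector, i.e. writing it back can never be a
    single-sector (and thus presumably atomic) write."""
    if length <= 0:
        return False
    first = offset // sector_size
    last = (offset + length - 1) // sector_size
    return first != last
```

For example, a 12-byte extent descriptor starting at offset 508 spans sectors
0 and 1, so a torn write could leave half an updated descriptor on disk.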
> > Also there is a part of my code that has never been tested (the attributes
> > file) because I could never find a FS using that feature (which is in
> > any case not completely defined in Apple's specs, and I doubt there is one
> > computer in the world with that feature anyway).
> I suggest you put in an exception which tells the user:
> "You have an HFS file system that has a feature that I haven't
> seen used anywhere. Please email me, so I can have a look at
> how it works! <address@hidden>"
Very good idea. I will put it in the next release.
> > > Speaking of testing, I think we could probably do a lot better than
> > > Parted's current regression tests. e2fsprogs has much better tests.
> > > Any volunteers?
> > What new tests are you thinking of?
> Well, the Parted tests only test the semantics, but they aren't
> especially thorough. The e2fsprogs tests contain file system images from
> "before" and "after" running e2fsck. When you run the tests,
> e2fsck is run on the before image, and the result is compared to the
> after image in the test suite.
File system images would be a good idea. They would mostly solve my
"how to reproducibly generate fragmentation in special files" problem :)