Re: Thread model (was: Ext2 superblock fault)

From: Samuel Thibault
Subject: Re: Thread model (was: Ext2 superblock fault)
Date: Wed, 19 Mar 2008 10:45:15 +0000
User-agent: Mutt/1.5.12-2006-07-14

olafBuddenhagen@gmx.net wrote on Tue 18 Mar 2008 11:02:43 +0100:
> On Mon, Mar 17, 2008 at 10:41:01AM +0000, Samuel Thibault wrote:
> > olafBuddenhagen@gmx.net wrote on Sun 16 Mar 2008 08:52:56 +0100:
> > > What makes me wonder is, how can it happen in the first place that
> > > so many requests are generated before the superblock is requested
> > > during handling of the first one?
> > 
> > ld-ing xulrunner, which needs a lot of memory (thus paging out
> > superblock), and then suddenly needs to write a lot of data, which
> > seemingly is not processed immediately, but on the periodical
> > sync_all.
> Well yes, I do understand why many requests are created in short
> succession. But that is not the question.
> If there are no other blocking points before the superblock read, the
> first request should be able to kick off the superblock read before the
> thread originally creating the requests is scheduled again -- before it
> can create further requests. Why is that not the case?
> I don't know how the syncing works, so I can't really tell what the
> problem is. If there are blocking points before the superblock read, we
> need to change that somehow. If the superblock read is the first
> blocking point already, we need to change the scheduling, to make sure
> that the request is handled -- up to the first blocking point -- before
> returning to the requester.

Yes, that's what I meant actually: the diskfs_sync_everything() function
can trigger a large number of thread creations.

One way to make this work correctly would be to tag threads with a
"level": diskfs_sync_everything() runs at level 0, the threads it spawns
run at level 1, the threads that handle their page faults run at level
2, and so on.  We then only need to cap the number of threads at each
level.  Requests could always make progress to completion, while the
total number of threads would stay bounded by a constant times the
maximum nesting depth of page faults.

