On Tue, Mar 2, 2010 at 1:30 PM, Daniel Maher
<address@hidden> wrote:
Daniel Maher wrote:
Raghavendra G wrote:
However, at two points during the multi-day test run, something
strange happened. The time to completion dropped _dramatically_,
and stayed there for numerous iterations, before jumping back up again:
Mostly reads are being served from io-cache?
Perhaps; it is worth noting that even though the operations are consistent, the data are being generated randomly. I concede that, statistically speaking, some of those 0's and 1's would be cached effectively, but this shouldn't account for a sudden ~50% increase in efficiency that disappears again just as suddenly as it appears.
While it is irresponsible to extrapolate from only three data points, my newest test run with io-cache disabled has yielded 10m30s, 10m36s, and 10m34s so far...
After hundreds of iterations the average "real" time per run was 10m25.522s. This was with io-cache totally disabled.
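For anyone repeating this, the per-run average can be computed directly from the "real" field of time(1) output. A small illustrative helper (the function names are mine, and the sample values are just the three runs quoted above, not the full data set):

```python
# Hypothetical helper for averaging time(1)-style "real" values
# such as "10m30s" or "10m25.522s".
import re

def to_seconds(t):
    """Convert a time(1)-style "XmY.YYYs" string to seconds."""
    m = re.fullmatch(r"(?:(\d+)m)?([\d.]+)s", t)
    minutes, seconds = m.groups()
    return int(minutes or 0) * 60 + float(seconds)

def mean_real(times):
    """Arithmetic mean of a list of time(1) "real" strings, in seconds."""
    return sum(map(to_seconds, times)) / len(times)

print(f"{mean_real(['10m30s', '10m36s', '10m34s']):.3f}s")  # 633.333s
```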
Thus it has been shown that, for a series of systematic read and write operations on progressively larger files filled with random data, io-cache is not appropriate (and can cause severe performance problems).
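For reference, the general shape of such a test pass can be sketched as follows. This is only a minimal illustration of the methodology, not the actual harness used; the file sizes, iteration structure, and use of fsync are my assumptions:

```python
# Hypothetical sketch of one test pass: write progressively larger
# files of random data, read each back, and record the wall-clock time.
import os
import tempfile
import time

def timed_pass(sizes=(1 << 20, 2 << 20, 4 << 20)):
    """Write then read random files of the given sizes; return elapsed seconds."""
    start = time.monotonic()
    with tempfile.TemporaryDirectory() as d:
        for i, size in enumerate(sizes):
            path = os.path.join(d, f"testfile-{i}")
            data = os.urandom(size)          # random data defeats content caching
            with open(path, "wb") as f:
                f.write(data)
                f.flush()
                os.fsync(f.fileno())         # push the write out to storage
            with open(path, "rb") as f:
                assert f.read() == data      # systematic read-back
    return time.monotonic() - start

if __name__ == "__main__":
    for run in range(3):
        print(f"run {run}: {timed_pass():.3f}s")
```

Run against a GlusterFS mount point (rather than a temporary directory), a loop like this would reproduce the read/write pattern discussed above.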
Of course, one could have postulated this intuitively - but there's nothing like some hard data to back up a hypothesis. :)
The real mystery is why the test with a small io-cache yielded two groups of highly variable times to completion (TTCs)...