From: Giuseppe Ragusa
Subject: Re: [Gluster-devel] Gluster 3.5 (latest nightly) NFS memleak
Date: Fri, 28 Mar 2014 00:27:07 +0100
Hi,
> Date: Thu, 27 Mar 2014 09:26:10 +0530
> From: address@hidden
> To: address@hidden; address@hidden
> Subject: Re: [Gluster-devel] Gluster 3.5 (latest nightly) NFS memleak
>
> On 03/27/2014 03:29 AM, Giuseppe Ragusa wrote:
> > Hi all,
> > I'm running glusterfs-3.5.20140324.4465475-1.autobuild (from the published
> > nightly rpm packages) on CentOS 6.5 as the storage solution for oVirt 3.4.0
> > (latest snapshot too) on 2 physical nodes (12 GiB RAM) with
> > self-hosted-engine.
> >
> > I suppose this should be a good "selling point" for Gluster/oVirt, and I
> > have solved almost all my oVirt problems, but one remains:
> > Gluster-provided NFS (used as a storage domain for the oVirt
> > self-hosted-engine) grows (from reboot) to about 8 GiB of RAM usage (I
> > have even had it die before, when put under cgroup memory restrictions)
> > in about one day of no actual usage (only the oVirt Engine VM is running
> > on one node, with no other operations performed on it or on the whole
> > cluster).
> >
> > I have seen similar reports on the users and devel mailing lists and I'm
> > wondering how I can help in diagnosing this, and/or whether it would be
> > better to rely on the latest 3.4.x Gluster (but it seems that the stable
> > line has had its share of memleaks too...).
> >
>
> Can you please check if turning off drc through:
>
> volume set <volname> nfs.drc off
>
> helps?
>
> -Vijay

I'm reinstalling just now to start from scratch with clean logs, configuration, etc. I will report after one day of activity, but from the old system I can already confirm that I had plenty of logs containing:

0-rpc-service: DRC failed to detect duplicates

Many thanks for your suggestion.

Regards,
Giuseppe
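[Editor's note: a minimal sketch of how one might apply and verify Vijay's suggestion, for readers following along. The volume name "engine" and the pid-file path are illustrative assumptions, not details taken from the thread; check both against your own setup.]

    # Disable the NFS Duplicate Request Cache (DRC) on the volume, as
    # suggested above. "engine" is a placeholder volume name.
    gluster volume set engine nfs.drc off

    # Confirm the change took effect; reconfigured settings appear under
    # "Options Reconfigured" in the volume info output.
    gluster volume info engine

    # Poll the resident memory of the Gluster NFS server to see whether the
    # growth stops. The pid-file path below is the usual glusterd default,
    # but it may differ per distribution (an assumption, verify locally).
    while true; do
        ps -o rss= -p "$(cat /var/lib/glusterd/nfs/run/nfs.pid)"
        sleep 60
    done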