RES: RES: RES: Bug on cp


From: Borges, Jenner Gigante (BR-Paulista Seguros)
Subject: RES: RES: RES: Bug on cp
Date: Tue, 31 Dec 2002 10:21:32 -0200

Hi!

Here is the top output now, but it looks pretty much the same as when the cp was running.

 10:15am  up 1 day,  5:31,  3 users,  load average: 0.26, 0.14, 0.18
119 processes: 118 sleeping, 1 running, 0 zombie, 0 stopped
CPU0 states:  1.1% user,  0.4% system,  0.0% nice, 97.5% idle
CPU1 states:  0.1% user,  6.2% system,  0.0% nice, 93.1% idle
CPU2 states:  3.3% user,  1.1% system,  0.0% nice, 94.5% idle
CPU3 states:  3.1% user,  0.4% system,  0.0% nice, 95.4% idle
Mem:  2059460K av, 2030384K used,   29076K free,       0K shrd,    9324K buff
Swap: 2048136K av,   80728K used, 1967408K free                 1834288K cached

Free output:

address@hidden /etc]# free
             total       used       free     shared    buffers     cached
Mem:       2059460    2026156      33304          0       9404    1839800
-/+ buffers/cache:     176952    1882508
Swap:      2048136      80284    1967852

The only way to increase swap space now is to create swap files, isn't it?
Do you know if that would be a good idea?
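
The recipe I have in mind for a 1 GB swap file, if I go that route, would be
something like the lines below; /swapfile is just an example path, and the
size would be adjusted to whatever fits the disks:

  # dd if=/dev/zero of=/swapfile bs=1024 count=1048576   # create a 1 GB file of zeros
  # chmod 600 /swapfile                                  # swap files should not be world-readable
  # mkswap /swapfile                                     # write the swap signature on it
  # swapon /swapfile                                     # start using it immediately
  # swapon -s                                            # confirm it shows up as active

A line like "/swapfile none swap sw 0 0" in /etc/fstab would keep it across
reboots.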

thanks again


-----Original Message-----
From: David T-G [mailto:address@hidden
Sent: Tuesday, 31 December 2002 10:15
To: FileUtils bugs
Cc: Borges, Jenner Gigante (BR-Paulista Seguros)
Subject: Re: RES: RES: Bug on cp


Jenner --

...and then Borges, Jenner Gigante (BR-Paulista Seguros) said...
% 
% Hello David!

Hi again!


% I have RAID 0+1.

OK...


% How can I see the inodes and increase them ?

Hmmm...  I forget for ext2, but it is probably in the mount or mkfs or
tunefs commands.  Check your man pages.  I don't think you can increase
inodes without destroying and recreating the filesystem :-(
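
Off the top of my head, 'df -i' will at least show how many inodes each
filesystem has and how many are in use; the count itself can only be chosen
when the filesystem is created, e.g. with mke2fs -N.  A rough sketch, where
/dev/sda1 is only a placeholder device (the second command would wipe it):

  # df -i                          # inode totals, used, and free per filesystem
  # mke2fs -N 4000000 /dev/sda1    # choose the inode count at creation time (destructive!)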


% Yes, I am going from ext2 to ext2.

OK.


% Last night I followed the backup steps, and I think that because we didn't
% run an update procedure, the datafiles were not accessed and the backup
% finished successfully, but the messages:

Hmmm...


% Dec 31 01:00:22 dtmart kernel: __alloc_pages: 0-order allocation failed.
% Dec 31 01:00:40 dtmart last message repeated 826 times
% Dec 31 01:00:40 dtmart kernel: failed.

That can't be good!

I forgot to ask: how much RAM and swap do you have in the system, and
what does top say?

I searched google for 'kernel: __alloc_pages: 0-order allocation failed'
and found numerous mailing list posts, most relating to bigmem.  I think
that we can safely say that this is not a problem for cp.


% 
% still appear in my /var/log/messages.
% During the cp process the top command showed me that kswapd, bdflush,
% and cp were consuming a lot of CPU.

Makes sense; you were shoveling a lot of data through the system.


% 
% Do you know if my kernel parameters were well tuned for this copy process?
% 
% # Disables packet forwarding
% net.ipv4.ip_forward = 0
% # Enables source route verification
% net.ipv4.conf.all.rp_filter = 1
% # Disables the magic-sysrq key
% kernel.sysrq = 0
% * Shared and Semaphores Parameters
% kernel.shmmax=2059460000
% kernel.shmall=2059460000
% kernel.shmmni=100
% kernel.shmseg=20
% kernel.shmmin=1
% kernel.semmni=500
% kernel.semmns=1500
% kernel.semmsl=200

I'm not absolutely sure, but all of the shared memory and semaphore
settings are just going to be for ordinary DB processes and not pertinent
to a cp like this.  I don't think that there are any knobs to bump up
disk-to-disk performance, like maybe a buffer allocation size; I think
that the system handles that on its own based on what resources it has.
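
If you do want to eyeball the VM side anyway, the current kernel tunables can
at least be listed on the fly with sysctl; just an example of looking, not a
tuning recommendation:

  # sysctl -a | grep '^vm\.'     # list the virtual-memory tunables and their values
  # cat /proc/sys/vm/bdflush     # on 2.4 kernels the bdflush parameters live here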


% 
% Thanks again and happy new year !!!

And to you :-)  Good luck!


:-D
-- 
David T-G                      * There is too much animal courage in 
(play) address@hidden * society and not sufficient moral courage.
(work) address@hidden  -- Mary Baker Eddy, "Science and Health"
http://www.justpickone.org/davidtg/    Shpx gur Pbzzhavpngvbaf Qrprapl Npg!



