[lustre-discuss] lustre-discuss Digest, Vol 122, Issue 9

Fernando Pérez fperez at icm.csic.es
Fri May 6 14:48:59 PDT 2016


Thank you Mark.

I finally killed the e2fsck. After restarting our Lustre filesystem again, everything seems to be working OK.

We are using two 300 GB 10K SAS drives in RAID 1 for the combined MDT / MGS.

I tried to run e2fsck -fy because the -fn run finished in 2 hours… I think there is a problem in the latest e2fsprogs, because e2fsck reported that it was repairing more inodes than our filesystem has.
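
For reference, the two modes look roughly like this; /dev/md0 here is only a
placeholder for our actual MDT device, which has to be stopped and unmounted
before either run:

    e2fsck -fn /dev/md0    # forced check, read-only, answers "no" to every fix
    e2fsck -fy /dev/md0    # forced check, answers "yes" to every fix and writes repairs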

Regards.
=============================================
Fernando Pérez
Institut de Ciències del Mar (CMIMA-CSIC)
Departament Oceanografía Física i Tecnològica
Passeig Marítim de la Barceloneta, 37-49
08003 Barcelona
Phone:  (+34) 93 230 96 35
=============================================

> On 6 May 2016, at 17:57, Mark Hahn <hahn at mcmaster.ca> wrote:
> 
>> More information about our lustre system: the combined mds / mdt is 189 GB,
>> with 8.9 GB used. It was formatted with the default options.
> 
> fsck time depends more on the number of files (inodes) than on the
> size.  but either you have quite slow storage, or something is wrong.
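> 
> a quick way to see how many inodes the MDT actually has, assuming it is
> unmounted and /dev/md0 stands in for the real device, is:
> 
>    dumpe2fs -h /dev/md0 | egrep -i 'inode count|free inodes'
> 
> (or just df -i on the mounted filesystem.)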
> 
> as a comparison point, I can do a full/force fsck on one of our MDS/MDTs
> that has 143G of its 3.3T in use (313M inodes) in about 2 hours.  it is an
> MD RAID10 on 16x 10K SAS drives, admittedly.
> 
> if your hardware is conventional (locally-attached multi-disk RAID),
> it might make sense to look at its configuration.  for instance, fsck
> is largely seek-limited, but doing too much readahead or using large
> RAID block sizes (for RAID 5/6) can be disadvantageous.  having plenty
> of RAM helps in some phases.
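> 
> for example, current readahead and the MD chunk size can be checked (and
> readahead tuned) with something like the following, where /dev/md0 is only
> a placeholder for the MDT device:
> 
>    blockdev --getra /dev/md0                  # readahead, in 512-byte sectors
>    blockdev --setra 256 /dev/md0              # lower it if it is set very high
>    mdadm --detail /dev/md0 | grep -i chunk    # RAID chunk size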
> 
> regards, mark hahn.


