[lustre-discuss] lustre-discuss Digest, Vol 122, Issue 9

Fernando Pérez fperez at icm.csic.es
Fri May 6 15:07:19 PDT 2016


Hi Andreas.

The latest version that I have seen in the Lustre repository is:

1.42.13.wc4-7
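
For completeness, the installed version can also be confirmed directly on
the MDS; the package query below assumes an RPM-based distribution:

    rpm -q e2fsprogs     # version of the installed e2fsprogs package
    e2fsck -V            # version reported by the e2fsck binary itself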

Regards

=============================================
Fernando Pérez
Institut de Ciències del Mar (CMIMA-CSIC)
Departament Oceanografía Física i Tecnològica
Passeig Marítim de la Barceloneta,37-49
08003 Barcelona
Phone:  (+34) 93 230 96 35
=============================================

> On 7 May 2016, at 00:02, Dilger, Andreas <andreas.dilger at intel.com> wrote:
> 
> On 2016/05/06, 15:48, "lustre-discuss on behalf of Fernando Pérez"
> <lustre-discuss-bounces at lists.lustre.org on behalf of fperez at icm.csic.es>
> wrote:
> 
>> Thank you Mark.
>> 
>> Finally I killed the e2fsck. After restarting our Lustre filesystem
>> again, everything seems to work OK.
>> 
>> We are using two 300 GB 10K SAS drives in RAID 1 for the combined MDT / MGS.
>> 
>> I tried to run e2fsck -fy because the -fn run finished in 2 hours... I
>> think there is a problem in the latest e2fsprogs, because e2fsck reported
>> that it was repairing more inodes than our filesystem has.
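
A minimal sketch of the two modes being discussed, assuming the MDT is
unmounted and /dev/mdt_dev stands in for the actual MDT block device:

    # read-only check: -n answers "no" to every prompt, so nothing is changed
    e2fsck -fn /dev/mdt_dev

    # repair run: -y answers "yes" to every prompt and writes fixes to disk
    e2fsck -fy /dev/mdt_dev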
> 
> Which specific version of e2fsprogs are you using?
> 
> Cheers, Andreas
> 
>> 
>> Regards.
>> =============================================
>> Fernando Pérez
>> Institut de Ciències del Mar (CMIMA-CSIC)
>> Departament Oceanografía Física i Tecnològica
>> Passeig Marítim de la Barceloneta,37-49
>> 08003 Barcelona
>> Phone:  (+34) 93 230 96 35
>> =============================================
>> 
>>> On 6 May 2016, at 17:57, Mark Hahn <hahn at mcmaster.ca> wrote:
>>> 
>>>> More information about our Lustre system: the combined MDS / MDT is
>>>> 189 GB in size with 8.9 GB used. It was formatted with the default
>>>> options.
>>> 
>>> fsck time depends more on the number of files (inodes) than on the
>>> size.  But either you have quite slow storage, or something is
>>> wrong.
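
A quick way to see how many inodes the MDT filesystem actually has, and how
many are in use, is the ext4 superblock; /dev/mdt_dev below is a placeholder
for the real MDT block device:

    # "Inode count" vs. "Free inodes" in the superblock header
    dumpe2fs -h /dev/mdt_dev | grep -i inode

    # or, if the MDT is mounted as ldiskfs for maintenance (mount point is hypothetical):
    df -i /mnt/mdt_ldiskfs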
>>> 
>>> As a comparison point, I can do a full/forced fsck on one of our MDS/MDTs
>>> that has 143G of 3.3T in use (313M inodes) in about 2 hours.  It is an MD
>>> RAID10 on 16x 10K SAS drives, admittedly.
>>> 
>>> If your hardware is conventional (locally-attached multi-disk RAID),
>>> it might make sense to look at its configuration.  For instance, fsck
>>> is largely seek-limited, so doing too much readahead, or using large
>>> RAID chunk sizes (for RAID 5/6), can be disadvantageous.  Having plenty
>>> of RAM helps in some phases.
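
A hedged sketch of how those settings could be inspected on a Linux MD
array; /dev/md0 is a placeholder for the actual MDT device:

    # readahead, in 512-byte sectors (can be lowered with blockdev --setra)
    blockdev --getra /dev/md0

    # RAID level, chunk size and member layout
    mdadm --detail /dev/md0

    # memory available for the fsck working set and page cache
    free -g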
>>> 
>>> regards, mark hahn.
>> 
>> _______________________________________________
>> lustre-discuss mailing list
>> lustre-discuss at lists.lustre.org
>> http://lists.lustre.org/listinfo.cgi/lustre-discuss-lustre.org
>> 
> 
> 
> Cheers, Andreas
> -- 
> Andreas Dilger
> 
> Lustre Principal Architect
> Intel High Performance Data Division
