[Lustre-discuss] Problem after upgrade to 1.6.5
daledude
dale.dewd at gmail.com
Wed Jul 9 08:09:04 PDT 2008
On Jul 9, 2:19 am, Andreas Dilger <adil... at sun.com> wrote:
> If this is a brand-new installation (i.e. there isn't any data on the
> RAID that you want to use/keep) then you could run "llverdev" on the
> device to see if the device is working properly. A "partial" (-p) run
> is enough to do a quick test of the device, but if you really aren't
> sure of the state of the devices then a "long" (-l) test can be useful
> (though somewhat slow).
>
> The ldiskfs filesystem in recent lustre releases is working with up
> to 8TB devices (though not more yet), so this shouldn't be a problem
> for your 5TB device.
>
> Cheers, Andreas
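For reference, the llverdev invocations Andreas describes would look roughly like the following. The device path is a placeholder, and note that the test writes to the device, so it should only be run while there is no data on it you want to keep:

```shell
# Quick "partial" check: writes and verifies test patterns at
# intervals across the device rather than every block.
llverdev -p /dev/sdb

# "Long" test: writes and verifies the full device. Much slower,
# but exercises every block of the 5TB array.
llverdev -l /dev/sdb
```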
Thanks, Andreas, for the llverdev tip. That will definitely be useful.
I ended up lowering the maximum number of ll_ost_io threads that
can run, from 128 down to 15. I was getting the timeouts and
disconnects after I removed another OSS that had 2 OSTs, so it
seems that put more of the load on the single OSS left.
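Assuming the change was made through the ost module's thread-count parameter (the mechanism the 1.6 manual documents for capping OSS service threads; the file location varies by distribution), it would look something like this:

```shell
# /etc/modprobe.conf (or a file under /etc/modprobe.d/, depending
# on the distro): cap the OSS I/O service threads at 15 instead of
# letting them scale up to 128. Takes effect when the ost module
# is next loaded.
options ost oss_num_threads=15
```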