[Lustre-discuss] how to add force_over_8tb to MDS

Cliff White cliffw at whamcloud.com
Thu Jul 14 11:18:01 PDT 2011

--writeconf will erase parameters set via lctl conf_param, and will erase
pool definitions.
It will also allow you to set rather silly parameters that can prevent your
filesystem from starting, such
as incorrect server NIDs or incorrect failover NIDs. For this reason (and
from a history of customer
support) we caveat its use in the manual.

The --writeconf option never touches data, only server configs, so it will
not mess up your data.

So, given sensible precautions as mentioned above, it's safe to do.
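For reference, the usual writeconf procedure looks roughly like the sketch below. This is an assumption-laden outline, not a recipe for your site: the device paths and mount points are placeholders, and the whole filesystem must be stopped first. Note also that --mountfsoptions replaces the existing option string rather than appending to it, so carry over any defaults your targets already use.

```shell
# Sketch only -- /dev/mdt_dev, /dev/ost_dev, and the mount points are
# hypothetical; substitute your own. Stop everything before starting.

# 1. Unmount clients, then OSTs, then the combined MGS/MDT.
umount /mnt/lustre          # on every client
umount /mnt/ost0            # on every OSS, for each OST
umount /mnt/mdt             # on the MDS

# 2. Regenerate the configuration logs. The mount option goes on the
#    OSTs; remember --mountfsoptions overwrites, it does not append.
tunefs.lustre --writeconf /dev/mdt_dev
tunefs.lustre --writeconf \
    --mountfsoptions="errors=remount-ro,extents,mballoc,force_over_8tb" \
    /dev/ost_dev

# 3. Remount in order: MGS/MDT first, then OSTs, then clients.
mount -t lustre /dev/mdt_dev /mnt/mdt
mount -t lustre /dev/ost_dev /mnt/ost0
```

Pool definitions and conf_param settings will need to be reapplied afterward, per the caveat above.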

On Thu, Jul 14, 2011 at 11:03 AM, Theodore Omtzigt
<theo at stillwater-sc.com> wrote:

> Andreas:
>   Thanks for taking a look at this. Unfortunately, I don't quite
> understand the guidance you present: "If you are seeing 'this'
> problem....". I haven't seen 'any' problems pertaining to >8tb yet, so I
> cannot place your guidance in the context of the question I posted.
> My question was whether or not I need this parameter on the MDS and, if
> so, how to apply it retroactively.  The Lustre environment I installed
> was the 1.8.5 set. Any insight into the issue would be appreciated.
> Theo
> On 7/14/2011 1:41 PM, Andreas Dilger wrote:
> > If you are seeing this problem it means you are using the ext3-based
> ldiskfs. Go back to the download site and get the lustre-ldiskfs and
> lustre-modules RPMs with ext4 in the name.
> >
> > That is the code that was tested with LUNs over 8TB. We kept these
> separate for some time to reduce risk for users that did not need larger LUN
> sizes.  This is the default for the recent Whamcloud 1.8.6 release.
> >
> > Cheers, Andreas
> >
> > On 2011-07-14, at 11:15 AM, Theodore Omtzigt <theo at stillwater-sc.com>
> >  wrote:
> >
> >> I configured a Lustre file system on a collection of storage servers
> >> that have 12TB raw devices. I configured a combined MGS/MDS with the
> >> default configuration. On the OSTs however I added the force_over_8tb to
> >> the mountfsoptions.
> >>
> >> Two part question:
> >> 1- do I need to set that parameter on the MGS/MDS server as well
> >> 2- if yes, how do I properly add this parameter on this running Lustre
> >> file system (100TB on 9 storage servers)
> >>
> >> I can't resolve the ambiguity in the documentation as I can't find a
> >> good explanation of the configuration log mechanism that is being
> >> referenced in the man pages. Since the doc for --writeconf
> >> states "This is very dangerous", I am hesitant to pull the trigger, as
> >> there is 60TB of data on this file system that I would rather not lose.
> >> _______________________________________________
> >> Lustre-discuss mailing list
> >> Lustre-discuss at lists.lustre.org
> >> http://lists.lustre.org/mailman/listinfo/lustre-discuss

Support Guy
WhamCloud, Inc.
