[Lustre-discuss] how to add force_over_8tb to MDS

Kevin Van Maren kevin.van.maren at oracle.com
Thu Jul 14 14:15:35 PDT 2011


With one other note: you should have used "--mkfsoptions='-t ext4'" when 
doing mkfs.lustre, and NOT the force option.
Given that it is already formatted and you don't want to lose the data, at 
least use the "ext4" Lustre RPMs.
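To make the distinction concrete, a hedged sketch of the recommended invocation follows; the device path, fsname, and MGS NID are placeholders, not values from this thread:

```shell
# Format an OST on a >8TB LUN using the ext4-based ldiskfs
# (requires the *.ext4.rpm Lustre packages to be installed).
# /dev/sdb, "lustre", and the MGS NID below are hypothetical examples.
mkfs.lustre --ost --fsname=lustre --mgsnode=192.168.1.1@tcp0 \
    --mkfsoptions='-t ext4' /dev/sdb

# The force_over_8tb option, by contrast, overrides the size check of the
# older ext3-based ldiskfs rather than fixing the underlying limitation.
```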

Pretty sure you don't need a --writeconf -- you would either run as-is 
with ext4-based ldiskfs or reformat.
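If a writeconf ever does become necessary, the usual sequence is roughly the following; device paths are placeholders, and all clients and targets must be stopped first:

```shell
# Regenerate the Lustre configuration logs (placeholder device paths).
# Unmount all clients and all targets before starting.

# On the MDS:
umount /mnt/mdt
tunefs.lustre --writeconf /dev/mdt_device

# On each OSS, for each OST:
tunefs.lustre --writeconf /dev/ost_device

# Remount the MDT first, then the OSTs, so the config logs are rebuilt.
# tunefs.lustre prints the old and new parameters -- check them before
# remounting, since previously set parameters can need re-specifying.
```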

The MDT device should be limited to 8TB; I don't think anyone has tested 
a larger MDT.

Kevin


Cliff White wrote:
> This error message you are seeing is what Andreas was talking about - 
> you must use the ext4-based version; you will not need any option with 
> LUNs of your size. The 'must use force_over_8tb' error is the key here: 
> you most certainly want/need the *.ext4.rpm versions of the packages. 
> cliffw
>
>
> On Thu, Jul 14, 2011 at 11:10 AM, Theodore Omtzigt 
> <theo at stillwater-sc.com> wrote:
>
>     Michael:
>
>        The reason I had to do it on the OSTs is that when issuing the
>     mkfs.lustre command to build an OST, it would error out with the
>     message that I should use the force_over_8tb mount option. I was not
>     able to create an OST on that device without that option.
>
>     Your insights on the writeconf are excellent: good to know that
>     writeconf is solid. Thank you.
>
>     Theo
>
>     On 7/14/2011 1:29 PM, Michael Barnes wrote:
>     > On Jul 14, 2011, at 1:15 PM, Theodore Omtzigt wrote:
>     >
>     >> Two part question:
>     >> 1- do I need to set that parameter on the MGS/MDS server as well
>     > No, they are different filesystems.  You shouldn't need to do
>     this on the OSTs either.  You must be using an older Lustre release.
>     >
>     >> 2- if yes, how do I properly add this parameter on this running
>     Lustre
>     >> file system (100TB on 9 storage servers)
>     > covered
>     >
>     >> I can't resolve the ambiguity in the documentation, as I can't
>     >> find a good explanation of the configuration log mechanism
>     >> referenced in the man pages. Since the doc for --writeconf
>     >> states "This is very dangerous", I am hesitant to pull the
>     >> trigger: there is 60TB of data on this file system that I would
>     >> rather not lose.
>     > I've had no issues with writeconf.  It's nice because it shows
>     > you the old and new parameters.  Make sure that the changes you
>     > made are what you want, and that the old parameters you want to
>     > keep are still intact.  I don't remember the exact circumstances,
>     > but I have found settings were lost when doing a writeconf, and I
>     > had to explicitly pass those settings to the tunefs.lustre command
>     > to preserve them.
>     >
>     > -mb
>     >
>     > --
>     > +-----------------------------------------------
>     > | Michael Barnes
>     > |
>     > | Thomas Jefferson National Accelerator Facility
>     > | Scientific Computing Group
>     > | 12000 Jefferson Ave.
>     > | Newport News, VA 23606
>     > | (757) 269-7634
>     > +-----------------------------------------------
>     >
>     >
>     >
>     >
>     >
>     _______________________________________________
>     Lustre-discuss mailing list
>     Lustre-discuss at lists.lustre.org
>     http://lists.lustre.org/mailman/listinfo/lustre-discuss
>
>
>
>
> -- 
> cliffw
> Support Guy
> WhamCloud, Inc. 
> www.whamcloud.com
>
>