[Lustre-discuss] Configuration of lustre FS on single node

ashok bharat bayana ashok.bharat.bayana at iiitb.ac.in
Thu Feb 28 23:21:31 PST 2008


Hi,
/dev/sda is not an ext3 file system; that, I think, is why my system crashed.
My Linux file system is on /dev/sda8, and it is ext3. So should I go with

$ mkfs.lustre --fsname spfs --mdt --mgs /dev/sda8

and proceed?
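
For reference, the full single-node sequence from the 1.6 manual's example would look roughly like this, with /dev/sdaX and /dev/sdaY as stand-ins for spare partitions (mkfs.lustre erases its target, so neither can be a partition holding data I need) and node1 as a stand-in for this host's NID:

$ mkfs.lustre --fsname spfs --mdt --mgs /dev/sdaX
$ mkdir -p /mnt/mdt
$ mount -t lustre /dev/sdaX /mnt/mdt
$ mkfs.lustre --fsname spfs --ost --mgsnode=node1@tcp0 /dev/sdaY
$ mkdir -p /mnt/ost0
$ mount -t lustre /dev/sdaY /mnt/ost0
$ mkdir -p /mnt/spfs
$ mount -t lustre node1@tcp0:/spfs /mnt/spfs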

Moreover, to run llmount.sh or local.sh we would need the lmc, lconf, and lctl utilities, but these are not provided in Lustre 1.6.4.2; that is why I am going this way.

Thanks and Regards,
Ashok Bharat 


-----Original Message-----
From: Amit Sharma [mailto:Amit.Sharma at Sun.COM]
Sent: Fri 2/29/2008 12:17 PM
To: ashok bharat bayana
Cc: lustre-discuss at lists.lustre.org
Subject: Re: [Lustre-discuss] Configuration of lustre FS on single node
 
Ashok,

Just looking at the command you tried:
 > $ mkfs.lustre --fsname spfs --mdt --mgs /dev/sda

You may have wrecked your root file system. Or is /dev/sda some other
device?
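
Before formatting anything, it is worth checking what a disk actually holds, for example:

$ mount | grep sda       # which sda partitions are mounted, and where
$ fdisk -l /dev/sda      # print the partition table (read-only)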

It may be simpler for you to try llmount.sh to get a simple setup of
Lustre on a single node (with loopback mounts).
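
A rough sketch of how that usually goes (assuming you built from source; the scripts live in lustre/tests, and by default they use loopback files under /tmp rather than real partitions):

$ cd lustre/tests
$ sh llmount.sh          # format loop-file-backed MDT/OST and mount a client at /mnt/lustre
$ sh llmountcleanup.sh   # unmount everything and clean up when done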

thanks,
Amit

ashok bharat bayana wrote:
> 
> Hello,
> I successfully built Lustre (1.6.4.2) on my system for a patchless client,
> but I don't know how to proceed with configuring the file system.
> 
> I'm trying to set up the MDS, MDT, and client all on a single node.
> From the tutorials I learned that
> 
> First, create an MDT for the "spfs" file system that uses the /dev/sda 
> disk. This MDT also acts as the MGS for the site.
> 
> $ mkfs.lustre --fsname spfs --mdt --mgs /dev/sda
> 
> But when I ran this command, my system crashed, all the data on it was
> lost, and I had to reinstall the OS.
> I would like help with mounting a Lustre file system.
> 
> Thanks and Regards,
> Ashok Bharat


-- 
Amit Sharma
Lustre Engineering
Sun Microsystems, Bangalore.
