[Lustre-discuss] Experienced New User needing help
Klaus Steden
klaus.steden at thomson.net
Fri Oct 26 11:22:11 PDT 2007
No, you won't be able to write to either the OST or MDS directories, or even
examine them ... they're just mount points to provide you with feedback
about disk usage.
I read in the docs that future releases may do something more useful with
these mount points, but for now, that's all they do. :-)
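For a per-target view of that usage, the lfs utility on a client gives a
cleaner report than plain df against the server mount points. A sketch,
assuming a client mount exists at /mnt/test (the mount point is only an
example):

```shell
# Show free/used space for each MDT and OST, as seen from a Lustre client.
# /mnt/test is an assumed client mount point; adjust to match yours.
lfs df -h /mnt/test
```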
cheers,
Klaus
On 10/26/07 1:47 AM, "Iain Grant" <Iain.Grant at scri.ac.uk> did etch on stone
tablets:
> Sorry, please ignore me. I have now done a
>
> mount -t lustre 143.234.96.46 at tcp0:/testfs /mnt/test
>
> and can write to the /mnt/test area.
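> For reference, the full client-side sequence amounts to this (note the MGS
> NID uses a literal @ in the real command; the mount point is arbitrary):
>
> ```shell
> # Mount testfs as a Lustre *client*; this is where reads and writes go,
> # not the server-side MDT/OST mount points.
> mkdir -p /mnt/test
> mount -t lustre 143.234.96.46@tcp0:/testfs /mnt/test
> ```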
> I thought I would be able to write to the OST area, apologies
>
> Iain
>
>
>
> From: lustre-discuss-bounces at clusterfs.com
> [mailto:lustre-discuss-bounces at clusterfs.com] On Behalf Of Iain Grant
> Sent: 26 October 2007 09:42
> To: lustre-discuss at clusterfs.com
> Subject: [Lustre-discuss] Experienced New User needing help
>
> Sorry folks, I'm still not getting any life from this.
> I've followed the manual and these steps:
>
> Module options for networking should first be set up in /etc/modprobe.conf,
> e.g.
> # Networking options, see /sys/module/lnet/parameters
> options lnet networks=tcp
> # end Lustre modules
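> If the node has several NICs, the networks option can also pin LNET to a
> specific interface. A hedged example, where eth0 is only an assumed
> interface name:
>
> ```
> # /etc/modprobe.conf: bind LNET's tcp0 network to eth0 (interface name assumed)
> options lnet networks=tcp0(eth0)
> ```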
> Making and starting a filesystem
> Combined MDT/MGS on my single node:
> mkfs.lustre --fsname=testfs --mdt --mgs /dev/sda1
> mkdir -p /mnt/test/mdt
> mount -t lustre /dev/sda1 /mnt/test/mdt
> cat /proc/fs/lustre/devices
> 0 UP mgs MGS MGS 5
> 1 UP mgc MGC143.234.96.46 at tcp 303242f4-5aa3-5377-4895-90a397d56153 5
> 2 UP mdt MDS MDS_uuid 3
> 3 UP lov testfs-mdtlov testfs-mdtlov_UUID 4
> 4 UP mds testfs-MDT0000 testfs-MDT0000_UUID 3
> Then I configured OST on the same node
> mkfs.lustre --fsname=testfs --ost --mgsnode=143.234.96.46 at tcp0 /dev/sda2
> mkdir -p /mnt/test/ost0
> mount -t lustre /dev/sda2 /mnt/test/ost0
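> To verify the OST actually registered, the device list should now show the
> OST entries alongside the MDS ones. A sketch using lctl (same information
> as /proc/fs/lustre/devices):
>
> ```shell
> # List all configured Lustre devices on this node; once the OST is up it
> # appears in this list in the UP state.
> lctl dl
> ```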
>
> When I cd to /mnt/test/ost0 I get
>
> "Not a directory" messages.
>
> If I do a df I can see the filesystems mounted.
>
> When I look at /var/log/messages I can see
>
>
> Oct 26 09:33:40 fraggle kernel: Lustre: Filtering OBD driver;
> info at clusterfs.com
> Oct 26 09:33:40 fraggle kernel: Lustre: testfs-OST0000: new disk, initializing
> Oct 26 09:33:41 fraggle kernel: Lustre: OST testfs-OST0000 now serving dev
> (testfs-OST0000/01493618-db27-ba73-4d41-6ab062fa5355) with recovery enabled
> Oct 26 09:33:41 fraggle kernel: Lustre: Server testfs-OST0000 on device
> /dev/sdb2 has started
> Oct 26 09:33:43 fraggle kernel: Lustre: testfs-OST0000: received MDS
> connection from 0 at lo
> Oct 26 09:33:43 fraggle kernel: Lustre: MDS testfs-MDT0000:
> testfs-OST0000_UUID now active, resetting orphans
>
>
> This is driving me nuts!
>
>
>
> _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
>
> SCRI, Invergowrie, Dundee, DD2 5DA.
> The Scottish Crop Research Institute is a charitable company limited by
> guarantee.
> Registered in Scotland No: SC 29367.
> Recognised by the Inland Revenue as a Scottish Charity No: SC 006662.
>
>
>
>
>
> _______________________________________________
> Lustre-discuss mailing list
> Lustre-discuss at clusterfs.com
> https://mail.clusterfs.com/mailman/listinfo/lustre-discuss